2026-04-08 00:00:06.927194 | Job console starting
2026-04-08 00:00:06.953622 | Updating git repos
2026-04-08 00:00:07.036189 | Cloning repos into workspace
2026-04-08 00:00:07.387512 | Restoring repo states
2026-04-08 00:00:07.436778 | Merging changes
2026-04-08 00:00:07.436796 | Checking out repos
2026-04-08 00:00:08.061614 | Preparing playbooks
2026-04-08 00:00:08.901697 | Running Ansible setup
2026-04-08 00:00:15.976758 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-08 00:00:17.479674 |
2026-04-08 00:00:17.479809 | PLAY [Base pre]
2026-04-08 00:00:17.513316 |
2026-04-08 00:00:17.513470 | TASK [Setup log path fact]
2026-04-08 00:00:17.534347 | orchestrator | ok
2026-04-08 00:00:17.564256 |
2026-04-08 00:00:17.564406 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-08 00:00:17.616710 | orchestrator | ok
2026-04-08 00:00:17.639925 |
2026-04-08 00:00:17.640037 | TASK [emit-job-header : Print job information]
2026-04-08 00:00:17.690086 | # Job Information
2026-04-08 00:00:17.690293 | Ansible Version: 2.16.14
2026-04-08 00:00:17.690358 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-08 00:00:17.690397 | Pipeline: periodic-midnight
2026-04-08 00:00:17.690421 | Executor: 521e9411259a
2026-04-08 00:00:17.690443 | Triggered by: https://github.com/osism/testbed
2026-04-08 00:00:17.690465 | Event ID: 605c59d85f174b4ca3197f00f9d26f38
2026-04-08 00:00:17.708066 |
2026-04-08 00:00:17.708181 | LOOP [emit-job-header : Print node information]
2026-04-08 00:00:17.835804 | orchestrator | ok:
2026-04-08 00:00:17.836097 | orchestrator | # Node Information
2026-04-08 00:00:17.836140 | orchestrator | Inventory Hostname: orchestrator
2026-04-08 00:00:17.836167 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-08 00:00:17.836190 | orchestrator | Username: zuul-testbed04
2026-04-08 00:00:17.836211 | orchestrator | Distro: Debian 12.13
2026-04-08 00:00:17.836235 | orchestrator | Provider: static-testbed
2026-04-08 00:00:17.836256 | orchestrator | Region:
2026-04-08 00:00:17.836276 | orchestrator | Label: testbed-orchestrator
2026-04-08 00:00:17.836296 | orchestrator | Product Name: OpenStack Nova
2026-04-08 00:00:17.836316 | orchestrator | Interface IP: 81.163.193.140
2026-04-08 00:00:17.854568 |
2026-04-08 00:00:17.854684 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-08 00:00:18.539197 | orchestrator -> localhost | changed
2026-04-08 00:00:18.580471 |
2026-04-08 00:00:18.580591 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-08 00:00:21.009921 | orchestrator -> localhost | changed
2026-04-08 00:00:21.046339 |
2026-04-08 00:00:21.046547 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-08 00:00:21.315831 | orchestrator -> localhost | ok
2026-04-08 00:00:21.322554 |
2026-04-08 00:00:21.322659 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-08 00:00:21.361294 | orchestrator | ok
2026-04-08 00:00:21.386065 | orchestrator | included: /var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-08 00:00:21.394368 |
2026-04-08 00:00:21.394469 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-08 00:00:25.128185 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-08 00:00:25.128463 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/work/9fcaa67de16142939af440d960a751f3_id_rsa
2026-04-08 00:00:25.128500 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/work/9fcaa67de16142939af440d960a751f3_id_rsa.pub
2026-04-08 00:00:25.128521 | orchestrator -> localhost | The key fingerprint is:
2026-04-08 00:00:25.128541 | orchestrator -> localhost | SHA256:q69kjhMbklpY01ihV9hw7TZBb7TbrytQcRZJNoQ+yhU zuul-build-sshkey
2026-04-08 00:00:25.128560 | orchestrator -> localhost | The key's randomart image is:
2026-04-08 00:00:25.128587 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-08 00:00:25.128607 | orchestrator -> localhost | | o=oo. .+*o |
2026-04-08 00:00:25.128626 | orchestrator -> localhost | | .oo. ooEo+. |
2026-04-08 00:00:25.128643 | orchestrator -> localhost | | .+. . o+= |
2026-04-08 00:00:25.128660 | orchestrator -> localhost | | +.. +.=o |
2026-04-08 00:00:25.128677 | orchestrator -> localhost | | o o oS=... |
2026-04-08 00:00:25.128696 | orchestrator -> localhost | |. + o +. . |
2026-04-08 00:00:25.128712 | orchestrator -> localhost | | o . +o .. . |
2026-04-08 00:00:25.128729 | orchestrator -> localhost | |. o= . . . |
2026-04-08 00:00:25.128746 | orchestrator -> localhost | | ..+o. .o. |
2026-04-08 00:00:25.128799 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-08 00:00:25.128846 | orchestrator -> localhost | ok: Runtime: 0:00:02.610728
2026-04-08 00:00:25.137433 |
2026-04-08 00:00:25.137533 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-08 00:00:25.199630 | orchestrator | ok
2026-04-08 00:00:25.228593 | orchestrator | included: /var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-08 00:00:25.253986 |
2026-04-08 00:00:25.254091 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-08 00:00:25.312727 | orchestrator | skipping: Conditional result was False
2026-04-08 00:00:25.322208 |
2026-04-08 00:00:25.322308 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-08 00:00:26.326657 | orchestrator | changed
2026-04-08 00:00:26.332925 |
2026-04-08 00:00:26.333009 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-08 00:00:26.710087 | orchestrator | ok
2026-04-08 00:00:26.723293 |
2026-04-08 00:00:26.723402 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-08 00:00:27.326058 | orchestrator | ok
2026-04-08 00:00:27.345318 |
2026-04-08 00:00:27.345447 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-08 00:00:27.922609 | orchestrator | ok
2026-04-08 00:00:27.927744 |
2026-04-08 00:00:27.927823 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-08 00:00:27.987594 | orchestrator | skipping: Conditional result was False
2026-04-08 00:00:27.993820 |
2026-04-08 00:00:27.993904 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-08 00:00:29.337950 | orchestrator -> localhost | changed
2026-04-08 00:00:29.362232 |
2026-04-08 00:00:29.362359 | TASK [add-build-sshkey : Add back temp key]
2026-04-08 00:00:30.405694 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/work/9fcaa67de16142939af440d960a751f3_id_rsa (zuul-build-sshkey)
2026-04-08 00:00:30.405876 | orchestrator -> localhost | ok: Runtime: 0:00:00.059496
2026-04-08 00:00:30.411822 |
2026-04-08 00:00:30.411915 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-08 00:00:31.098762 | orchestrator | ok
2026-04-08 00:00:31.105515 |
2026-04-08 00:00:31.105614 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-08 00:00:31.142543 | orchestrator | skipping: Conditional result was False
2026-04-08 00:00:31.273858 |
2026-04-08 00:00:31.273972 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-08 00:00:32.037792 | orchestrator | ok
2026-04-08 00:00:32.081421 |
2026-04-08 00:00:32.081544 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-08 00:00:32.140254 | orchestrator | ok
2026-04-08 00:00:32.145970 |
2026-04-08 00:00:32.146052 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-08 00:00:33.049070 | orchestrator -> localhost | ok
2026-04-08 00:00:33.056566 |
2026-04-08 00:00:33.056674 | TASK [validate-host : Collect information about the host]
2026-04-08 00:00:34.611145 | orchestrator | ok
2026-04-08 00:00:34.636848 |
2026-04-08 00:00:34.636950 | TASK [validate-host : Sanitize hostname]
2026-04-08 00:00:34.714760 | orchestrator | ok
2026-04-08 00:00:34.719145 |
2026-04-08 00:00:34.719229 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-08 00:00:35.863467 | orchestrator -> localhost | changed
2026-04-08 00:00:35.868865 |
2026-04-08 00:00:35.868948 | TASK [validate-host : Collect information about zuul worker]
2026-04-08 00:00:36.526272 | orchestrator | ok
2026-04-08 00:00:36.530748 |
2026-04-08 00:00:36.530858 | TASK [validate-host : Write out all zuul information for each host]
2026-04-08 00:00:38.104339 | orchestrator -> localhost | changed
2026-04-08 00:00:38.112671 |
2026-04-08 00:00:38.112760 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-08 00:00:38.419061 | orchestrator | ok
2026-04-08 00:00:38.423994 |
2026-04-08 00:00:38.424081 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-08 00:02:03.842042 | orchestrator | changed:
2026-04-08 00:02:03.846400 | orchestrator | .d..t...... src/
2026-04-08 00:02:03.846497 | orchestrator | .d..t...... src/github.com/
2026-04-08 00:02:03.846773 | orchestrator | .d..t...... src/github.com/osism/
2026-04-08 00:02:03.846865 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-08 00:02:03.846892 | orchestrator | RedHat.yml
2026-04-08 00:02:03.863340 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-08 00:02:03.863359 | orchestrator | RedHat.yml
2026-04-08 00:02:03.863411 | orchestrator | = 2.2.0"...
2026-04-08 00:02:14.747355 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-08 00:02:14.762709 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-08 00:02:15.115884 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-08 00:02:15.683490 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-08 00:02:15.740841 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-08 00:02:16.196957 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-08 00:02:16.466002 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-08 00:02:17.216923 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-08 00:02:17.216998 | orchestrator |
2026-04-08 00:02:17.217006 | orchestrator | Providers are signed by their developers.
2026-04-08 00:02:17.217012 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-08 00:02:17.217025 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-08 00:02:17.217096 | orchestrator |
2026-04-08 00:02:17.217103 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-08 00:02:17.217119 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-08 00:02:17.217124 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-08 00:02:17.217135 | orchestrator | you run "tofu init" in the future.
2026-04-08 00:02:17.217538 | orchestrator |
2026-04-08 00:02:17.217582 | orchestrator | OpenTofu has been successfully initialized!
2026-04-08 00:02:17.217603 | orchestrator |
2026-04-08 00:02:17.217607 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-08 00:02:17.217611 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-08 00:02:17.217616 | orchestrator | should now work.
2026-04-08 00:02:17.217620 | orchestrator |
2026-04-08 00:02:17.217624 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-08 00:02:17.217628 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-08 00:02:17.217639 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-08 00:02:17.402945 | orchestrator | Created and switched to workspace "ci"!
2026-04-08 00:02:17.403016 | orchestrator |
2026-04-08 00:02:17.403028 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-08 00:02:17.403039 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-08 00:02:17.403085 | orchestrator | for this configuration.
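The provider resolution above is driven by a `required_providers` block in the testbed's OpenTofu configuration. A minimal sketch of such a block, hedged: only the `">= 1.53.0"` constraint for the openstack provider and the installed versions appear in the log; the other constraints and the exact layout are assumptions.

```hcl
terraform {
  required_providers {
    # hashicorp/local and hashicorp/null resolve to v2.8.0 and v3.2.4 in this run
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
    # constraint taken verbatim from the log; resolves to v3.4.0 here
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
```

`tofu init` records the resolved versions in `.terraform.lock.hcl`, which is why the log recommends committing that file.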
2026-04-08 00:02:17.557728 | orchestrator | ci.auto.tfvars
2026-04-08 00:02:17.971508 | orchestrator | default_custom.tf
2026-04-08 00:02:19.471449 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-08 00:02:20.523957 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 2s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-08 00:02:20.806318 | orchestrator |
2026-04-08 00:02:20.806419 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-08 00:02:20.806435 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-08 00:02:20.806447 | orchestrator | + create
2026-04-08 00:02:20.806459 | orchestrator | <= read (data resources)
2026-04-08 00:02:20.806471 | orchestrator |
2026-04-08 00:02:20.806483 | orchestrator | OpenTofu will perform the following actions:
2026-04-08 00:02:20.806519 | orchestrator |
2026-04-08 00:02:20.806531 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-08 00:02:20.806542 | orchestrator | # (config refers to values not yet known)
2026-04-08 00:02:20.806554 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-08 00:02:20.806564 | orchestrator | + checksum = (known after apply)
2026-04-08 00:02:20.806576 | orchestrator | + created_at = (known after apply)
2026-04-08 00:02:20.806587 | orchestrator | + file = (known after apply)
2026-04-08 00:02:20.806598 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.806636 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.806648 | orchestrator | + min_disk_gb = (known after apply)
2026-04-08 00:02:20.806659 | orchestrator | + min_ram_mb = (known after apply)
2026-04-08 00:02:20.806670 | orchestrator | + most_recent = true
2026-04-08 00:02:20.806681 | orchestrator | + name = (known after apply)
2026-04-08 00:02:20.806691 | orchestrator | + protected = (known after apply)
2026-04-08 00:02:20.806702 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.806717 | orchestrator | + schema = (known after apply)
2026-04-08 00:02:20.806728 | orchestrator | + size_bytes = (known after apply)
2026-04-08 00:02:20.806739 | orchestrator | + tags = (known after apply)
2026-04-08 00:02:20.806749 | orchestrator | + updated_at = (known after apply)
2026-04-08 00:02:20.806760 | orchestrator | }
2026-04-08 00:02:20.806789 | orchestrator |
2026-04-08 00:02:20.806800 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-08 00:02:20.806811 | orchestrator | # (config refers to values not yet known)
2026-04-08 00:02:20.806822 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-08 00:02:20.806833 | orchestrator | + checksum = (known after apply)
2026-04-08 00:02:20.806843 | orchestrator | + created_at = (known after apply)
2026-04-08 00:02:20.806878 | orchestrator | + file = (known after apply)
2026-04-08 00:02:20.806889 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.806905 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.806925 | orchestrator | + min_disk_gb = (known after apply)
2026-04-08 00:02:20.806944 | orchestrator | + min_ram_mb = (known after apply)
2026-04-08 00:02:20.806962 | orchestrator | + most_recent = true
2026-04-08 00:02:20.806981 | orchestrator | + name = (known after apply)
2026-04-08 00:02:20.806999 | orchestrator | + protected = (known after apply)
2026-04-08 00:02:20.807016 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.807034 | orchestrator | + schema = (known after apply)
2026-04-08 00:02:20.807083 | orchestrator | + size_bytes = (known after apply)
2026-04-08 00:02:20.807103 | orchestrator | + tags = (known after apply)
2026-04-08 00:02:20.807122 | orchestrator | + updated_at = (known after apply)
2026-04-08 00:02:20.807139 | orchestrator | }
2026-04-08 00:02:20.807169 | orchestrator |
2026-04-08 00:02:20.807189 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-08 00:02:20.807210 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-08 00:02:20.807222 | orchestrator | + content = (known after apply)
2026-04-08 00:02:20.807233 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:20.807244 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:20.807255 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:20.807266 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:20.807276 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:20.807287 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:20.807298 | orchestrator | + directory_permission = "0777"
2026-04-08 00:02:20.807309 | orchestrator | + file_permission = "0644"
2026-04-08 00:02:20.807319 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-08 00:02:20.807330 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.807341 | orchestrator | }
2026-04-08 00:02:20.807351 | orchestrator |
2026-04-08 00:02:20.807362 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-08 00:02:20.807373 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-08 00:02:20.807383 | orchestrator | + content = (known after apply)
2026-04-08 00:02:20.807394 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:20.807405 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:20.807416 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:20.807426 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:20.807437 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:20.807460 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:20.807471 | orchestrator | + directory_permission = "0777"
2026-04-08 00:02:20.807482 | orchestrator | + file_permission = "0644"
2026-04-08 00:02:20.807504 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-08 00:02:20.807515 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.807526 | orchestrator | }
2026-04-08 00:02:20.807536 | orchestrator |
2026-04-08 00:02:20.807547 | orchestrator | # local_file.inventory will be created
2026-04-08 00:02:20.807557 | orchestrator | + resource "local_file" "inventory" {
2026-04-08 00:02:20.807568 | orchestrator | + content = (known after apply)
2026-04-08 00:02:20.807578 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:20.807589 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:20.807600 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:20.807610 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:20.807622 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:20.807633 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:20.807644 | orchestrator | + directory_permission = "0777"
2026-04-08 00:02:20.807654 | orchestrator | + file_permission = "0644"
2026-04-08 00:02:20.807665 | orchestrator | + filename = "inventory.ci"
2026-04-08 00:02:20.807675 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.807686 | orchestrator | }
2026-04-08 00:02:20.807696 | orchestrator |
2026-04-08 00:02:20.807707 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-08 00:02:20.807717 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-08 00:02:20.807728 | orchestrator | + content = (sensitive value)
2026-04-08 00:02:20.807738 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:20.807749 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:20.807759 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:20.807770 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:20.807780 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:20.807791 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:20.807802 | orchestrator | + directory_permission = "0700"
2026-04-08 00:02:20.807812 | orchestrator | + file_permission = "0600"
2026-04-08 00:02:20.807823 | orchestrator | + filename = ".id_rsa.ci"
2026-04-08 00:02:20.807834 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.807844 | orchestrator | }
2026-04-08 00:02:20.807855 | orchestrator |
2026-04-08 00:02:20.807865 | orchestrator | # null_resource.node_semaphore will be created
2026-04-08 00:02:20.807876 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-08 00:02:20.807886 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.807897 | orchestrator | }
2026-04-08 00:02:20.807907 | orchestrator |
2026-04-08 00:02:20.807918 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-08 00:02:20.807928 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-08 00:02:20.807939 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.807949 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.807960 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.807971 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:20.807981 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.807992 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-08 00:02:20.808002 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.808013 | orchestrator | + size = 80
2026-04-08 00:02:20.808023 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.808034 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.808045 | orchestrator | }
2026-04-08 00:02:20.808109 | orchestrator |
2026-04-08 00:02:20.808120 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-08 00:02:20.808131 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:20.808142 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.808152 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.808163 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.808181 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:20.808191 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.808202 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-08 00:02:20.808213 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.808223 | orchestrator | + size = 80
2026-04-08 00:02:20.808234 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.808347 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.808368 | orchestrator | }
2026-04-08 00:02:20.808403 | orchestrator |
2026-04-08 00:02:20.808424 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-08 00:02:20.808442 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:20.808457 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.808468 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.808479 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.808490 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:20.808501 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.808511 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-08 00:02:20.808522 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.808533 | orchestrator | + size = 80
2026-04-08 00:02:20.808544 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.808555 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.808566 | orchestrator | }
2026-04-08 00:02:20.808577 | orchestrator |
2026-04-08 00:02:20.808588 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-08 00:02:20.808598 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:20.808609 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.808620 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.808631 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.808642 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:20.808653 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.808663 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-08 00:02:20.808674 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.808685 | orchestrator | + size = 80
2026-04-08 00:02:20.808703 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.808715 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.808726 | orchestrator | }
2026-04-08 00:02:20.808736 | orchestrator |
2026-04-08 00:02:20.808747 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-08 00:02:20.808758 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:20.808769 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.808779 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.808790 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.808801 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:20.808812 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.808823 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-08 00:02:20.808833 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.808844 | orchestrator | + size = 80
2026-04-08 00:02:20.808855 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.808866 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.808876 | orchestrator | }
2026-04-08 00:02:20.808887 | orchestrator |
2026-04-08 00:02:20.808898 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-08 00:02:20.808909 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:20.808919 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.808930 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.808941 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.808978 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:20.808989 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.809000 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-08 00:02:20.809011 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.809021 | orchestrator | + size = 80
2026-04-08 00:02:20.809032 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.809043 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.809083 | orchestrator | }
2026-04-08 00:02:20.809102 | orchestrator |
2026-04-08 00:02:20.809121 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-08 00:02:20.809140 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:20.809158 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.809170 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.809180 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.809191 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:20.809201 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.809212 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-08 00:02:20.809222 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.809233 | orchestrator | + size = 80
2026-04-08 00:02:20.809244 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.809254 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.809265 | orchestrator | }
2026-04-08 00:02:20.809276 | orchestrator |
2026-04-08 00:02:20.809286 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-08 00:02:20.809298 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.809309 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.809319 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.809330 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.809340 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.809351 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-08 00:02:20.809362 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.809373 | orchestrator | + size = 20
2026-04-08 00:02:20.809384 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.809395 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.809406 | orchestrator | }
2026-04-08 00:02:20.809416 | orchestrator |
2026-04-08 00:02:20.809427 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-08 00:02:20.809437 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.809448 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.809459 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.809469 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.809480 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.809490 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-08 00:02:20.809580 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.809592 | orchestrator | + size = 20
2026-04-08 00:02:20.809614 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.809625 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.809636 | orchestrator | }
2026-04-08 00:02:20.809646 | orchestrator |
2026-04-08 00:02:20.809657 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-08 00:02:20.809668 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.809679 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.809689 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.809700 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.809728 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.809739 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-08 00:02:20.809750 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.809770 | orchestrator | + size = 20
2026-04-08 00:02:20.809781 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.809792 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.809802 | orchestrator | }
2026-04-08 00:02:20.809813 | orchestrator |
2026-04-08 00:02:20.809824 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-08 00:02:20.809834 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.809845 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.809855 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.809866 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.809884 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.809894 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-08 00:02:20.809905 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.809916 | orchestrator | + size = 20
2026-04-08 00:02:20.809927 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.809937 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.809948 | orchestrator | }
2026-04-08 00:02:20.809958 | orchestrator |
2026-04-08 00:02:20.809969 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-08 00:02:20.809980 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.809991 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.810001 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.810012 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.810120 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.810131 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-08 00:02:20.810142 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.810153 | orchestrator | + size = 20
2026-04-08 00:02:20.810164 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.810174 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.810185 | orchestrator | }
2026-04-08 00:02:20.810196 | orchestrator |
2026-04-08 00:02:20.810206 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-08 00:02:20.810217 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.810228 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.810238 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.810249 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.810259 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.810270 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-08 00:02:20.810281 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.810292 | orchestrator | + size = 20
2026-04-08 00:02:20.810302 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.810313 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.810324 | orchestrator | }
2026-04-08 00:02:20.810334 | orchestrator |
2026-04-08 00:02:20.810345 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-08 00:02:20.810356 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.810366 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.810377 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.810388 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.810399 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.810409 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-08 00:02:20.810420 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.810430 | orchestrator | + size = 20
2026-04-08 00:02:20.810441 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.810452 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.810462 | orchestrator | }
2026-04-08 00:02:20.810473 | orchestrator |
2026-04-08 00:02:20.810484 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-08 00:02:20.810494 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:20.810512 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:20.810523 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:20.810534 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.810545 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:20.810555 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-08 00:02:20.810566 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.810584 | orchestrator | + size = 20
2026-04-08 00:02:20.810603 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:20.810623 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:20.810641 | orchestrator | }
2026-04-08 00:02:20.810659 | orchestrator |
2026-04-08 00:02:20.810678 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-08 00:02:20.810695 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-08 00:02:20.810712 | orchestrator | + attachment = (known after apply) 2026-04-08 00:02:20.810731 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.810750 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.810769 | orchestrator | + metadata = (known after apply) 2026-04-08 00:02:20.810788 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-08 00:02:20.810806 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.810827 | orchestrator | + size = 20 2026-04-08 00:02:20.810847 | orchestrator | + volume_retype_policy = "never" 2026-04-08 00:02:20.810868 | orchestrator | + volume_type = "ssd" 2026-04-08 00:02:20.810889 | orchestrator | } 2026-04-08 00:02:20.810908 | orchestrator | 2026-04-08 00:02:20.810925 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-08 00:02:20.810941 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-08 00:02:20.810958 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:20.810987 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:20.811005 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:20.811027 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:20.811071 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.811092 | orchestrator | + config_drive = true 2026-04-08 00:02:20.811120 | orchestrator | + created = (known after apply) 2026-04-08 00:02:20.811139 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:20.811151 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-08 00:02:20.811161 | orchestrator | + force_delete = false 2026-04-08 00:02:20.811172 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:20.811182 | 
orchestrator | + id = (known after apply) 2026-04-08 00:02:20.811193 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:20.811203 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:20.811213 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:20.811224 | orchestrator | + name = "testbed-manager" 2026-04-08 00:02:20.811234 | orchestrator | + power_state = "active" 2026-04-08 00:02:20.811245 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.811255 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:20.811266 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:20.811276 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:20.811298 | orchestrator | + user_data = (sensitive value) 2026-04-08 00:02:20.811308 | orchestrator | 2026-04-08 00:02:20.811320 | orchestrator | + block_device { 2026-04-08 00:02:20.811330 | orchestrator | + boot_index = 0 2026-04-08 00:02:20.811341 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:20.811352 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:20.811362 | orchestrator | + multiattach = false 2026-04-08 00:02:20.811373 | orchestrator | + source_type = "volume" 2026-04-08 00:02:20.811383 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.811404 | orchestrator | } 2026-04-08 00:02:20.811415 | orchestrator | 2026-04-08 00:02:20.811426 | orchestrator | + network { 2026-04-08 00:02:20.811436 | orchestrator | + access_network = false 2026-04-08 00:02:20.811447 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:20.811457 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:20.811468 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:20.811479 | orchestrator | + name = (known after apply) 2026-04-08 00:02:20.811489 | orchestrator | + port = (known after apply) 2026-04-08 00:02:20.811500 | orchestrator | + uuid = (known after apply) 2026-04-08 
00:02:20.811510 | orchestrator | } 2026-04-08 00:02:20.811521 | orchestrator | } 2026-04-08 00:02:20.811532 | orchestrator | 2026-04-08 00:02:20.811542 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-08 00:02:20.811553 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:20.811564 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:20.811574 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:20.811585 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:20.811596 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:20.811606 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.811617 | orchestrator | + config_drive = true 2026-04-08 00:02:20.811627 | orchestrator | + created = (known after apply) 2026-04-08 00:02:20.811638 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:20.811648 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:20.811659 | orchestrator | + force_delete = false 2026-04-08 00:02:20.811670 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:20.811680 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.811691 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:20.811702 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:20.811713 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:20.811723 | orchestrator | + name = "testbed-node-0" 2026-04-08 00:02:20.811734 | orchestrator | + power_state = "active" 2026-04-08 00:02:20.811745 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.811755 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:20.811766 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:20.811776 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:20.811787 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:20.811798 | orchestrator | 2026-04-08 00:02:20.811809 | orchestrator | + block_device { 2026-04-08 00:02:20.811819 | orchestrator | + boot_index = 0 2026-04-08 00:02:20.811830 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:20.811840 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:20.811851 | orchestrator | + multiattach = false 2026-04-08 00:02:20.811862 | orchestrator | + source_type = "volume" 2026-04-08 00:02:20.811872 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.811883 | orchestrator | } 2026-04-08 00:02:20.811893 | orchestrator | 2026-04-08 00:02:20.811904 | orchestrator | + network { 2026-04-08 00:02:20.811914 | orchestrator | + access_network = false 2026-04-08 00:02:20.811925 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:20.811936 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:20.811947 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:20.811957 | orchestrator | + name = (known after apply) 2026-04-08 00:02:20.811968 | orchestrator | + port = (known after apply) 2026-04-08 00:02:20.811979 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.811989 | orchestrator | } 2026-04-08 00:02:20.812000 | orchestrator | } 2026-04-08 00:02:20.812010 | orchestrator | 2026-04-08 00:02:20.812021 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-08 00:02:20.812032 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:20.812042 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:20.812080 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:20.812091 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:20.812102 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:20.812113 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.812124 
| orchestrator | + config_drive = true 2026-04-08 00:02:20.812134 | orchestrator | + created = (known after apply) 2026-04-08 00:02:20.812145 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:20.812155 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:20.812166 | orchestrator | + force_delete = false 2026-04-08 00:02:20.812177 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:20.812194 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.812205 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:20.812215 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:20.812226 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:20.812236 | orchestrator | + name = "testbed-node-1" 2026-04-08 00:02:20.812247 | orchestrator | + power_state = "active" 2026-04-08 00:02:20.812257 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.812268 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:20.812278 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:20.812289 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:20.812305 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:20.812316 | orchestrator | 2026-04-08 00:02:20.812326 | orchestrator | + block_device { 2026-04-08 00:02:20.812337 | orchestrator | + boot_index = 0 2026-04-08 00:02:20.812347 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:20.812358 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:20.812368 | orchestrator | + multiattach = false 2026-04-08 00:02:20.812379 | orchestrator | + source_type = "volume" 2026-04-08 00:02:20.812389 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.812400 | orchestrator | } 2026-04-08 00:02:20.812410 | orchestrator | 2026-04-08 00:02:20.812421 | orchestrator | + network { 2026-04-08 00:02:20.812432 | orchestrator | + access_network = 
false 2026-04-08 00:02:20.812442 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:20.812453 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:20.812463 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:20.812474 | orchestrator | + name = (known after apply) 2026-04-08 00:02:20.812484 | orchestrator | + port = (known after apply) 2026-04-08 00:02:20.812494 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.812505 | orchestrator | } 2026-04-08 00:02:20.812516 | orchestrator | } 2026-04-08 00:02:20.812526 | orchestrator | 2026-04-08 00:02:20.812537 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-08 00:02:20.812547 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:20.812558 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:20.812568 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:20.812580 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:20.812591 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:20.812601 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.812612 | orchestrator | + config_drive = true 2026-04-08 00:02:20.812622 | orchestrator | + created = (known after apply) 2026-04-08 00:02:20.812633 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:20.812643 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:20.812654 | orchestrator | + force_delete = false 2026-04-08 00:02:20.812664 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:20.812675 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.812685 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:20.812709 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:20.812720 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:20.812730 | orchestrator | + name = 
"testbed-node-2" 2026-04-08 00:02:20.812741 | orchestrator | + power_state = "active" 2026-04-08 00:02:20.812751 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.812762 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:20.812772 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:20.812783 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:20.812793 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:20.812804 | orchestrator | 2026-04-08 00:02:20.812814 | orchestrator | + block_device { 2026-04-08 00:02:20.812824 | orchestrator | + boot_index = 0 2026-04-08 00:02:20.812835 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:20.812845 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:20.812856 | orchestrator | + multiattach = false 2026-04-08 00:02:20.812866 | orchestrator | + source_type = "volume" 2026-04-08 00:02:20.812876 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.812887 | orchestrator | } 2026-04-08 00:02:20.812897 | orchestrator | 2026-04-08 00:02:20.812908 | orchestrator | + network { 2026-04-08 00:02:20.812918 | orchestrator | + access_network = false 2026-04-08 00:02:20.812928 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:20.812939 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:20.812949 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:20.812960 | orchestrator | + name = (known after apply) 2026-04-08 00:02:20.812970 | orchestrator | + port = (known after apply) 2026-04-08 00:02:20.812981 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.812991 | orchestrator | } 2026-04-08 00:02:20.813002 | orchestrator | } 2026-04-08 00:02:20.813012 | orchestrator | 2026-04-08 00:02:20.813028 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-08 00:02:20.813039 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:20.813103 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:20.813115 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:20.813126 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:20.813137 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:20.813147 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.813158 | orchestrator | + config_drive = true 2026-04-08 00:02:20.813168 | orchestrator | + created = (known after apply) 2026-04-08 00:02:20.813179 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:20.813189 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:20.813200 | orchestrator | + force_delete = false 2026-04-08 00:02:20.813210 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:20.813221 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.813231 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:20.813242 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:20.813252 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:20.813263 | orchestrator | + name = "testbed-node-3" 2026-04-08 00:02:20.813273 | orchestrator | + power_state = "active" 2026-04-08 00:02:20.813284 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.813294 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:20.813305 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:20.813315 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:20.813333 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:20.813344 | orchestrator | 2026-04-08 00:02:20.813354 | orchestrator | + block_device { 2026-04-08 00:02:20.813365 | orchestrator | + boot_index = 0 2026-04-08 00:02:20.813376 | orchestrator | + delete_on_termination = false 2026-04-08 
00:02:20.813386 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:20.813404 | orchestrator | + multiattach = false 2026-04-08 00:02:20.813415 | orchestrator | + source_type = "volume" 2026-04-08 00:02:20.813425 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.813436 | orchestrator | } 2026-04-08 00:02:20.813446 | orchestrator | 2026-04-08 00:02:20.813456 | orchestrator | + network { 2026-04-08 00:02:20.813465 | orchestrator | + access_network = false 2026-04-08 00:02:20.813474 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:20.813484 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:20.813493 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:20.813513 | orchestrator | + name = (known after apply) 2026-04-08 00:02:20.813523 | orchestrator | + port = (known after apply) 2026-04-08 00:02:20.813532 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.813542 | orchestrator | } 2026-04-08 00:02:20.813551 | orchestrator | } 2026-04-08 00:02:20.813560 | orchestrator | 2026-04-08 00:02:20.813570 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-08 00:02:20.813580 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:20.813589 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:20.813598 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:20.813608 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:20.813617 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:20.813627 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.813636 | orchestrator | + config_drive = true 2026-04-08 00:02:20.813645 | orchestrator | + created = (known after apply) 2026-04-08 00:02:20.813654 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:20.813664 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:20.813673 | 
orchestrator | + force_delete = false 2026-04-08 00:02:20.813682 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:20.813692 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.813701 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:20.813710 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:20.813720 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:20.813729 | orchestrator | + name = "testbed-node-4" 2026-04-08 00:02:20.813739 | orchestrator | + power_state = "active" 2026-04-08 00:02:20.813748 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.813757 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:20.813766 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:20.813776 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:20.813785 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:20.813795 | orchestrator | 2026-04-08 00:02:20.813804 | orchestrator | + block_device { 2026-04-08 00:02:20.813813 | orchestrator | + boot_index = 0 2026-04-08 00:02:20.813823 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:20.813832 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:20.813841 | orchestrator | + multiattach = false 2026-04-08 00:02:20.813850 | orchestrator | + source_type = "volume" 2026-04-08 00:02:20.813860 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.813869 | orchestrator | } 2026-04-08 00:02:20.813878 | orchestrator | 2026-04-08 00:02:20.813888 | orchestrator | + network { 2026-04-08 00:02:20.813897 | orchestrator | + access_network = false 2026-04-08 00:02:20.813906 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:20.813916 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:20.813925 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:20.813935 | orchestrator | + name = (known 
after apply) 2026-04-08 00:02:20.813944 | orchestrator | + port = (known after apply) 2026-04-08 00:02:20.813953 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.813962 | orchestrator | } 2026-04-08 00:02:20.813972 | orchestrator | } 2026-04-08 00:02:20.813987 | orchestrator | 2026-04-08 00:02:20.813996 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-08 00:02:20.814006 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:20.814041 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:20.814071 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:20.814081 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:20.814091 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:20.814100 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:20.814110 | orchestrator | + config_drive = true 2026-04-08 00:02:20.814119 | orchestrator | + created = (known after apply) 2026-04-08 00:02:20.814129 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:20.814139 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:20.814148 | orchestrator | + force_delete = false 2026-04-08 00:02:20.814157 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:20.814167 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.814176 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:20.814186 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:20.814195 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:20.814204 | orchestrator | + name = "testbed-node-5" 2026-04-08 00:02:20.814214 | orchestrator | + power_state = "active" 2026-04-08 00:02:20.814223 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.814232 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:20.814242 | orchestrator | + 
stop_before_destroy = false 2026-04-08 00:02:20.814251 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:20.814261 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:20.814271 | orchestrator | 2026-04-08 00:02:20.814280 | orchestrator | + block_device { 2026-04-08 00:02:20.814290 | orchestrator | + boot_index = 0 2026-04-08 00:02:20.814299 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:20.814309 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:20.814318 | orchestrator | + multiattach = false 2026-04-08 00:02:20.814327 | orchestrator | + source_type = "volume" 2026-04-08 00:02:20.814337 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.814346 | orchestrator | } 2026-04-08 00:02:20.814356 | orchestrator | 2026-04-08 00:02:20.814365 | orchestrator | + network { 2026-04-08 00:02:20.814375 | orchestrator | + access_network = false 2026-04-08 00:02:20.814395 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:20.814405 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:20.814414 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:20.814424 | orchestrator | + name = (known after apply) 2026-04-08 00:02:20.814433 | orchestrator | + port = (known after apply) 2026-04-08 00:02:20.814443 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:20.814452 | orchestrator | } 2026-04-08 00:02:20.814462 | orchestrator | } 2026-04-08 00:02:20.814471 | orchestrator | 2026-04-08 00:02:20.814480 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-08 00:02:20.814490 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-08 00:02:20.814499 | orchestrator | + fingerprint = (known after apply) 2026-04-08 00:02:20.814508 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.814518 | orchestrator | + name = "testbed" 2026-04-08 00:02:20.814527 | orchestrator | + private_key = 
(sensitive value) 2026-04-08 00:02:20.814536 | orchestrator | + public_key = (known after apply) 2026-04-08 00:02:20.814546 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.814555 | orchestrator | + user_id = (known after apply) 2026-04-08 00:02:20.814564 | orchestrator | } 2026-04-08 00:02:20.814573 | orchestrator | 2026-04-08 00:02:20.814583 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-08 00:02:20.814592 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-08 00:02:20.814609 | orchestrator | + device = (known after apply) 2026-04-08 00:02:20.814618 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.814628 | orchestrator | + instance_id = (known after apply) 2026-04-08 00:02:20.814637 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.814651 | orchestrator | + volume_id = (known after apply) 2026-04-08 00:02:20.814661 | orchestrator | } 2026-04-08 00:02:20.814670 | orchestrator | 2026-04-08 00:02:20.814680 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-08 00:02:20.814690 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-08 00:02:20.814699 | orchestrator | + device = (known after apply) 2026-04-08 00:02:20.814708 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.814718 | orchestrator | + instance_id = (known after apply) 2026-04-08 00:02:20.814727 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.814736 | orchestrator | + volume_id = (known after apply) 2026-04-08 00:02:20.814745 | orchestrator | } 2026-04-08 00:02:20.814755 | orchestrator | 2026-04-08 00:02:20.814764 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-08 00:02:20.814774 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-04-08 00:02:20.814783 | orchestrator | + device = (known after apply)
2026-04-08 00:02:20.814792 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.814802 | orchestrator | + instance_id = (known after apply)
2026-04-08 00:02:20.814811 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.814820 | orchestrator | + volume_id = (known after apply)
2026-04-08 00:02:20.814829 | orchestrator | }
2026-04-08 00:02:20.814839 | orchestrator |
2026-04-08 00:02:20.814848 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-04-08 00:02:20.814858 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-08 00:02:20.814867 | orchestrator | + device = (known after apply)
2026-04-08 00:02:20.814877 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.814886 | orchestrator | + instance_id = (known after apply)
2026-04-08 00:02:20.814896 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.814905 | orchestrator | + volume_id = (known after apply)
2026-04-08 00:02:20.814914 | orchestrator | }
2026-04-08 00:02:20.814923 | orchestrator |
2026-04-08 00:02:20.814933 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-04-08 00:02:20.814942 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-08 00:02:20.814952 | orchestrator | + device = (known after apply)
2026-04-08 00:02:20.814961 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.814970 | orchestrator | + instance_id = (known after apply)
2026-04-08 00:02:20.814980 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.814989 | orchestrator | + volume_id = (known after apply)
2026-04-08 00:02:20.814998 | orchestrator | }
2026-04-08 00:02:20.815007 | orchestrator |
2026-04-08 00:02:20.815017 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-04-08 00:02:20.815026 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-08 00:02:20.815036 | orchestrator | + device = (known after apply)
2026-04-08 00:02:20.815045 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815070 | orchestrator | + instance_id = (known after apply)
2026-04-08 00:02:20.815080 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815090 | orchestrator | + volume_id = (known after apply)
2026-04-08 00:02:20.815099 | orchestrator | }
2026-04-08 00:02:20.815108 | orchestrator |
2026-04-08 00:02:20.815118 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-04-08 00:02:20.815128 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-08 00:02:20.815137 | orchestrator | + device = (known after apply)
2026-04-08 00:02:20.815147 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815156 | orchestrator | + instance_id = (known after apply)
2026-04-08 00:02:20.815166 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815181 | orchestrator | + volume_id = (known after apply)
2026-04-08 00:02:20.815190 | orchestrator | }
2026-04-08 00:02:20.815199 | orchestrator |
2026-04-08 00:02:20.815209 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-04-08 00:02:20.815218 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-08 00:02:20.815228 | orchestrator | + device = (known after apply)
2026-04-08 00:02:20.815237 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815247 | orchestrator | + instance_id = (known after apply)
2026-04-08 00:02:20.815256 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815266 | orchestrator | + volume_id = (known after apply)
2026-04-08 00:02:20.815275 | orchestrator | }
2026-04-08 00:02:20.815284 | orchestrator |
2026-04-08 00:02:20.815294 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-04-08 00:02:20.815303 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-08 00:02:20.815313 | orchestrator | + device = (known after apply)
2026-04-08 00:02:20.815322 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815332 | orchestrator | + instance_id = (known after apply)
2026-04-08 00:02:20.815341 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815350 | orchestrator | + volume_id = (known after apply)
2026-04-08 00:02:20.815360 | orchestrator | }
2026-04-08 00:02:20.815369 | orchestrator |
2026-04-08 00:02:20.815386 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-04-08 00:02:20.815397 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-04-08 00:02:20.815407 | orchestrator | + fixed_ip = (known after apply)
2026-04-08 00:02:20.815416 | orchestrator | + floating_ip = (known after apply)
2026-04-08 00:02:20.815425 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815434 | orchestrator | + port_id = (known after apply)
2026-04-08 00:02:20.815444 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815453 | orchestrator | }
2026-04-08 00:02:20.815463 | orchestrator |
2026-04-08 00:02:20.815472 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-04-08 00:02:20.815482 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-04-08 00:02:20.815491 | orchestrator | + address = (known after apply)
2026-04-08 00:02:20.815500 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.815514 | orchestrator | + dns_domain = (known after apply)
2026-04-08 00:02:20.815524 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.815533 | orchestrator | + fixed_ip = (known after apply)
2026-04-08 00:02:20.815543 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815552 | orchestrator | + pool = "public"
2026-04-08 00:02:20.815561 | orchestrator | + port_id = (known after apply)
2026-04-08 00:02:20.815571 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815580 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.815590 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.815599 | orchestrator | }
2026-04-08 00:02:20.815608 | orchestrator |
2026-04-08 00:02:20.815618 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-04-08 00:02:20.815627 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-04-08 00:02:20.815637 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.815646 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.815655 | orchestrator | + availability_zone_hints = [
2026-04-08 00:02:20.815665 | orchestrator | + "nova",
2026-04-08 00:02:20.815674 | orchestrator | ]
2026-04-08 00:02:20.815683 | orchestrator | + dns_domain = (known after apply)
2026-04-08 00:02:20.815693 | orchestrator | + external = (known after apply)
2026-04-08 00:02:20.815703 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815712 | orchestrator | + mtu = (known after apply)
2026-04-08 00:02:20.815721 | orchestrator | + name = "net-testbed-management"
2026-04-08 00:02:20.815731 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.815746 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.815755 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815765 | orchestrator | + shared = (known after apply)
2026-04-08 00:02:20.815774 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.815783 | orchestrator | + transparent_vlan = (known after apply)
2026-04-08 00:02:20.815793 | orchestrator |
2026-04-08 00:02:20.815802 | orchestrator | + segments (known after apply)
2026-04-08 00:02:20.815812 | orchestrator | }
2026-04-08 00:02:20.815821 | orchestrator |
2026-04-08 00:02:20.815830 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-04-08 00:02:20.815840 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-04-08 00:02:20.815849 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.815859 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-08 00:02:20.815868 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-08 00:02:20.815877 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.815887 | orchestrator | + device_id = (known after apply)
2026-04-08 00:02:20.815896 | orchestrator | + device_owner = (known after apply)
2026-04-08 00:02:20.815905 | orchestrator | + dns_assignment = (known after apply)
2026-04-08 00:02:20.815915 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.815924 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.815933 | orchestrator | + mac_address = (known after apply)
2026-04-08 00:02:20.815943 | orchestrator | + network_id = (known after apply)
2026-04-08 00:02:20.815952 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.815961 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.815971 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.815980 | orchestrator | + security_group_ids = (known after apply)
2026-04-08 00:02:20.815989 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.815999 | orchestrator |
2026-04-08 00:02:20.816008 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.816018 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-08 00:02:20.816027 | orchestrator | }
2026-04-08 00:02:20.816037 | orchestrator |
2026-04-08 00:02:20.816046 | orchestrator | + binding (known after apply)
2026-04-08 00:02:20.816098 | orchestrator |
2026-04-08 00:02:20.816108 | orchestrator | + fixed_ip {
2026-04-08 00:02:20.816118 | orchestrator | + ip_address = "192.168.16.5"
2026-04-08 00:02:20.816127 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.816137 | orchestrator | }
2026-04-08 00:02:20.816146 | orchestrator | }
2026-04-08 00:02:20.816155 | orchestrator |
2026-04-08 00:02:20.816165 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-04-08 00:02:20.816174 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-08 00:02:20.816184 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.816193 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-08 00:02:20.816203 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-08 00:02:20.816212 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.816222 | orchestrator | + device_id = (known after apply)
2026-04-08 00:02:20.816231 | orchestrator | + device_owner = (known after apply)
2026-04-08 00:02:20.816240 | orchestrator | + dns_assignment = (known after apply)
2026-04-08 00:02:20.816250 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.816259 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.816269 | orchestrator | + mac_address = (known after apply)
2026-04-08 00:02:20.816278 | orchestrator | + network_id = (known after apply)
2026-04-08 00:02:20.816287 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.816297 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.816306 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.816327 | orchestrator | + security_group_ids = (known after apply)
2026-04-08 00:02:20.816337 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.816347 | orchestrator |
2026-04-08 00:02:20.816356 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.816365 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-08 00:02:20.816375 | orchestrator | }
2026-04-08 00:02:20.816383 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.816391 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-08 00:02:20.816399 | orchestrator | }
2026-04-08 00:02:20.816406 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.816414 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-08 00:02:20.816422 | orchestrator | }
2026-04-08 00:02:20.816429 | orchestrator |
2026-04-08 00:02:20.816437 | orchestrator | + binding (known after apply)
2026-04-08 00:02:20.816444 | orchestrator |
2026-04-08 00:02:20.816452 | orchestrator | + fixed_ip {
2026-04-08 00:02:20.816460 | orchestrator | + ip_address = "192.168.16.10"
2026-04-08 00:02:20.816467 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.816475 | orchestrator | }
2026-04-08 00:02:20.816482 | orchestrator | }
2026-04-08 00:02:20.816490 | orchestrator |
2026-04-08 00:02:20.816497 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-04-08 00:02:20.816505 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-08 00:02:20.816517 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.816525 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-08 00:02:20.816532 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-08 00:02:20.816540 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.816548 | orchestrator | + device_id = (known after apply)
2026-04-08 00:02:20.816555 | orchestrator | + device_owner = (known after apply)
2026-04-08 00:02:20.816563 | orchestrator | + dns_assignment = (known after apply)
2026-04-08 00:02:20.816570 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.816578 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.816585 | orchestrator | + mac_address = (known after apply)
2026-04-08 00:02:20.816593 | orchestrator | + network_id = (known after apply)
2026-04-08 00:02:20.816601 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.816610 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.816623 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.816636 | orchestrator | + security_group_ids = (known after apply)
2026-04-08 00:02:20.816648 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.816659 | orchestrator |
2026-04-08 00:02:20.816671 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.816683 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-08 00:02:20.816694 | orchestrator | }
2026-04-08 00:02:20.816704 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.816717 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-08 00:02:20.816729 | orchestrator | }
2026-04-08 00:02:20.816742 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.816754 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-08 00:02:20.816766 | orchestrator | }
2026-04-08 00:02:20.816777 | orchestrator |
2026-04-08 00:02:20.816790 | orchestrator | + binding (known after apply)
2026-04-08 00:02:20.816802 | orchestrator |
2026-04-08 00:02:20.816815 | orchestrator | + fixed_ip {
2026-04-08 00:02:20.816827 | orchestrator | + ip_address = "192.168.16.11"
2026-04-08 00:02:20.816840 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.816853 | orchestrator | }
2026-04-08 00:02:20.816863 | orchestrator | }
2026-04-08 00:02:20.816871 | orchestrator |
2026-04-08 00:02:20.816879 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-04-08 00:02:20.816887 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-08 00:02:20.816894 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.816902 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-08 00:02:20.816910 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-08 00:02:20.816918 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.816933 | orchestrator | + device_id = (known after apply)
2026-04-08 00:02:20.816941 | orchestrator | + device_owner = (known after apply)
2026-04-08 00:02:20.816949 | orchestrator | + dns_assignment = (known after apply)
2026-04-08 00:02:20.816957 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.816964 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.816972 | orchestrator | + mac_address = (known after apply)
2026-04-08 00:02:20.816979 | orchestrator | + network_id = (known after apply)
2026-04-08 00:02:20.816987 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.816995 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.817002 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.817010 | orchestrator | + security_group_ids = (known after apply)
2026-04-08 00:02:20.817018 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.817025 | orchestrator |
2026-04-08 00:02:20.817033 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817041 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-08 00:02:20.817065 | orchestrator | }
2026-04-08 00:02:20.817073 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817081 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-08 00:02:20.817089 | orchestrator | }
2026-04-08 00:02:20.817096 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817105 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-08 00:02:20.817112 | orchestrator | }
2026-04-08 00:02:20.817120 | orchestrator |
2026-04-08 00:02:20.817128 | orchestrator | + binding (known after apply)
2026-04-08 00:02:20.817136 | orchestrator |
2026-04-08 00:02:20.817143 | orchestrator | + fixed_ip {
2026-04-08 00:02:20.817151 | orchestrator | + ip_address = "192.168.16.12"
2026-04-08 00:02:20.817159 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.817167 | orchestrator | }
2026-04-08 00:02:20.817175 | orchestrator | }
2026-04-08 00:02:20.817183 | orchestrator |
2026-04-08 00:02:20.817190 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-04-08 00:02:20.817198 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-08 00:02:20.817206 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.817214 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-08 00:02:20.817222 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-08 00:02:20.817230 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.817238 | orchestrator | + device_id = (known after apply)
2026-04-08 00:02:20.817245 | orchestrator | + device_owner = (known after apply)
2026-04-08 00:02:20.817253 | orchestrator | + dns_assignment = (known after apply)
2026-04-08 00:02:20.817261 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.817269 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.817276 | orchestrator | + mac_address = (known after apply)
2026-04-08 00:02:20.817284 | orchestrator | + network_id = (known after apply)
2026-04-08 00:02:20.817292 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.817306 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.817314 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.817322 | orchestrator | + security_group_ids = (known after apply)
2026-04-08 00:02:20.817329 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.817337 | orchestrator |
2026-04-08 00:02:20.817345 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817353 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-08 00:02:20.817361 | orchestrator | }
2026-04-08 00:02:20.817369 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817376 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-08 00:02:20.817384 | orchestrator | }
2026-04-08 00:02:20.817392 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817400 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-08 00:02:20.817407 | orchestrator | }
2026-04-08 00:02:20.817415 | orchestrator |
2026-04-08 00:02:20.817428 | orchestrator | + binding (known after apply)
2026-04-08 00:02:20.817436 | orchestrator |
2026-04-08 00:02:20.817444 | orchestrator | + fixed_ip {
2026-04-08 00:02:20.817451 | orchestrator | + ip_address = "192.168.16.13"
2026-04-08 00:02:20.817459 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.817467 | orchestrator | }
2026-04-08 00:02:20.817475 | orchestrator | }
2026-04-08 00:02:20.817482 | orchestrator |
2026-04-08 00:02:20.817490 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-04-08 00:02:20.817498 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-08 00:02:20.817505 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.817513 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-08 00:02:20.817521 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-08 00:02:20.817528 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.817536 | orchestrator | + device_id = (known after apply)
2026-04-08 00:02:20.817544 | orchestrator | + device_owner = (known after apply)
2026-04-08 00:02:20.817551 | orchestrator | + dns_assignment = (known after apply)
2026-04-08 00:02:20.817559 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.817571 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.817579 | orchestrator | + mac_address = (known after apply)
2026-04-08 00:02:20.817587 | orchestrator | + network_id = (known after apply)
2026-04-08 00:02:20.817595 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.817602 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.817610 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.817618 | orchestrator | + security_group_ids = (known after apply)
2026-04-08 00:02:20.817626 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.817634 | orchestrator |
2026-04-08 00:02:20.817642 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817654 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-08 00:02:20.817662 | orchestrator | }
2026-04-08 00:02:20.817670 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817678 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-08 00:02:20.817685 | orchestrator | }
2026-04-08 00:02:20.817693 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.817701 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-08 00:02:20.817708 | orchestrator | }
2026-04-08 00:02:20.817716 | orchestrator |
2026-04-08 00:02:20.817724 | orchestrator | + binding (known after apply)
2026-04-08 00:02:20.817732 | orchestrator |
2026-04-08 00:02:20.823539 | orchestrator | + fixed_ip {
2026-04-08 00:02:20.823617 | orchestrator | + ip_address = "192.168.16.14"
2026-04-08 00:02:20.823626 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.823633 | orchestrator | }
2026-04-08 00:02:20.823640 | orchestrator | }
2026-04-08 00:02:20.823646 | orchestrator |
2026-04-08 00:02:20.823652 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-04-08 00:02:20.823659 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-08 00:02:20.823666 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.823673 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-08 00:02:20.823680 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-08 00:02:20.823686 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.823692 | orchestrator | + device_id = (known after apply)
2026-04-08 00:02:20.823698 | orchestrator | + device_owner = (known after apply)
2026-04-08 00:02:20.823704 | orchestrator | + dns_assignment = (known after apply)
2026-04-08 00:02:20.823710 | orchestrator | + dns_name = (known after apply)
2026-04-08 00:02:20.823716 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.823722 | orchestrator | + mac_address = (known after apply)
2026-04-08 00:02:20.823728 | orchestrator | + network_id = (known after apply)
2026-04-08 00:02:20.823734 | orchestrator | + port_security_enabled = (known after apply)
2026-04-08 00:02:20.823740 | orchestrator | + qos_policy_id = (known after apply)
2026-04-08 00:02:20.823768 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.823774 | orchestrator | + security_group_ids = (known after apply)
2026-04-08 00:02:20.823780 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.823786 | orchestrator |
2026-04-08 00:02:20.823792 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.823798 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-08 00:02:20.823804 | orchestrator | }
2026-04-08 00:02:20.823810 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.823817 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-08 00:02:20.823822 | orchestrator | }
2026-04-08 00:02:20.823828 | orchestrator | + allowed_address_pairs {
2026-04-08 00:02:20.823835 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-08 00:02:20.823841 | orchestrator | }
2026-04-08 00:02:20.823847 | orchestrator |
2026-04-08 00:02:20.823853 | orchestrator | + binding (known after apply)
2026-04-08 00:02:20.823859 | orchestrator |
2026-04-08 00:02:20.823865 | orchestrator | + fixed_ip {
2026-04-08 00:02:20.823871 | orchestrator | + ip_address = "192.168.16.15"
2026-04-08 00:02:20.823877 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.823883 | orchestrator | }
2026-04-08 00:02:20.823889 | orchestrator | }
2026-04-08 00:02:20.823895 | orchestrator |
2026-04-08 00:02:20.823901 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-04-08 00:02:20.823907 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-04-08 00:02:20.823913 | orchestrator | + force_destroy = false
2026-04-08 00:02:20.823919 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.823925 | orchestrator | + port_id = (known after apply)
2026-04-08 00:02:20.823931 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.823937 | orchestrator | + router_id = (known after apply)
2026-04-08 00:02:20.823943 | orchestrator | + subnet_id = (known after apply)
2026-04-08 00:02:20.823949 | orchestrator | }
2026-04-08 00:02:20.823955 | orchestrator |
2026-04-08 00:02:20.823961 | orchestrator | # openstack_networking_router_v2.router will be created
2026-04-08 00:02:20.823967 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-04-08 00:02:20.823973 | orchestrator | + admin_state_up = (known after apply)
2026-04-08 00:02:20.823979 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.823998 | orchestrator | + availability_zone_hints = [
2026-04-08 00:02:20.824005 | orchestrator | + "nova",
2026-04-08 00:02:20.824011 | orchestrator | ]
2026-04-08 00:02:20.824017 | orchestrator | + distributed = (known after apply)
2026-04-08 00:02:20.824023 | orchestrator | + enable_snat = (known after apply)
2026-04-08 00:02:20.824029 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-04-08 00:02:20.824035 | orchestrator | + external_qos_policy_id = (known after apply)
2026-04-08 00:02:20.824041 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824091 | orchestrator | + name = "testbed"
2026-04-08 00:02:20.824100 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824106 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824112 | orchestrator |
2026-04-08 00:02:20.824119 | orchestrator | + external_fixed_ip (known after apply)
2026-04-08 00:02:20.824125 | orchestrator | }
2026-04-08 00:02:20.824131 | orchestrator |
2026-04-08 00:02:20.824137 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-04-08 00:02:20.824144 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-04-08 00:02:20.824150 | orchestrator | + description = "ssh"
2026-04-08 00:02:20.824156 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824162 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824168 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824174 | orchestrator | + port_range_max = 22
2026-04-08 00:02:20.824180 | orchestrator | + port_range_min = 22
2026-04-08 00:02:20.824186 | orchestrator | + protocol = "tcp"
2026-04-08 00:02:20.824192 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824204 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824210 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824217 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-08 00:02:20.824223 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824228 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824235 | orchestrator | }
2026-04-08 00:02:20.824241 | orchestrator |
2026-04-08 00:02:20.824247 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-04-08 00:02:20.824253 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-04-08 00:02:20.824259 | orchestrator | + description = "wireguard"
2026-04-08 00:02:20.824293 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824300 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824307 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824313 | orchestrator | + port_range_max = 51820
2026-04-08 00:02:20.824319 | orchestrator | + port_range_min = 51820
2026-04-08 00:02:20.824325 | orchestrator | + protocol = "udp"
2026-04-08 00:02:20.824331 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824337 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824344 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824350 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-08 00:02:20.824356 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824362 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824368 | orchestrator | }
2026-04-08 00:02:20.824374 | orchestrator |
2026-04-08 00:02:20.824381 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-04-08 00:02:20.824387 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-04-08 00:02:20.824402 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824408 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824415 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824421 | orchestrator | + protocol = "tcp"
2026-04-08 00:02:20.824427 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824433 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824439 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824445 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-08 00:02:20.824451 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824457 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824464 | orchestrator | }
2026-04-08 00:02:20.824470 | orchestrator |
2026-04-08 00:02:20.824476 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-04-08 00:02:20.824482 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-04-08 00:02:20.824488 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824495 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824501 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824507 | orchestrator | + protocol = "udp"
2026-04-08 00:02:20.824513 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824519 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824525 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824531 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-08 00:02:20.824537 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824543 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824549 | orchestrator | }
2026-04-08 00:02:20.824554 | orchestrator |
2026-04-08 00:02:20.824559 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-04-08 00:02:20.824568 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-04-08 00:02:20.824574 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824579 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824584 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824590 | orchestrator | + protocol = "icmp"
2026-04-08 00:02:20.824595 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824600 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824606 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824611 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-08 00:02:20.824624 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824629 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824635 | orchestrator | }
2026-04-08 00:02:20.824640 | orchestrator |
2026-04-08 00:02:20.824646 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-08 00:02:20.824651 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-08 00:02:20.824656 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824662 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824667 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824672 | orchestrator | + protocol = "tcp"
2026-04-08 00:02:20.824678 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824683 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824688 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824693 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-08 00:02:20.824699 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824704 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824709 | orchestrator | }
2026-04-08 00:02:20.824714 | orchestrator |
2026-04-08 00:02:20.824720 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-08 00:02:20.824725 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-08 00:02:20.824730 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824736 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824741 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824746 | orchestrator | + protocol = "udp"
2026-04-08 00:02:20.824751 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824757 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824762 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824767 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-08 00:02:20.824773 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824778 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824783 | orchestrator | }
2026-04-08 00:02:20.824789 | orchestrator |
2026-04-08 00:02:20.824794 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-08 00:02:20.824799 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-08 00:02:20.824805 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824810 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824815 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824820 | orchestrator | + protocol = "icmp"
2026-04-08 00:02:20.824825 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824831 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824836 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824841 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-08 00:02:20.824846 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824852 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824861 | orchestrator | }
2026-04-08 00:02:20.824866 | orchestrator |
2026-04-08 00:02:20.824871 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-08 00:02:20.824877 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-08 00:02:20.824882 | orchestrator | + description = "vrrp"
2026-04-08 00:02:20.824887 | orchestrator | + direction = "ingress"
2026-04-08 00:02:20.824893 | orchestrator | + ethertype = "IPv4"
2026-04-08 00:02:20.824898 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824903 | orchestrator | + protocol = "112"
2026-04-08 00:02:20.824908 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824914 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-08 00:02:20.824919 | orchestrator | + remote_group_id = (known after apply)
2026-04-08 00:02:20.824924 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-08 00:02:20.824930 | orchestrator | + security_group_id = (known after apply)
2026-04-08 00:02:20.824935 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824940 | orchestrator | }
2026-04-08 00:02:20.824946 | orchestrator |
2026-04-08 00:02:20.824951 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-08 00:02:20.824956 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-08 00:02:20.824962 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.824967 | orchestrator | + description = "management security group"
2026-04-08 00:02:20.824972 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.824977 | orchestrator | + name = "testbed-management"
2026-04-08 00:02:20.824983 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.824988 | orchestrator | + stateful = (known after apply)
2026-04-08 00:02:20.824993 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.824998 | orchestrator | }
2026-04-08 00:02:20.825004 | orchestrator |
2026-04-08 00:02:20.825009 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-08 00:02:20.825014 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-08 00:02:20.825019 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.825025 | orchestrator | + description = "node security group"
2026-04-08 00:02:20.825030 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.825035 | orchestrator | + name = "testbed-node"
2026-04-08 00:02:20.825041 | orchestrator | + region = (known after apply)
2026-04-08 00:02:20.825046 | orchestrator | + stateful = (known after apply)
2026-04-08 00:02:20.825061 | orchestrator | + tenant_id = (known after apply)
2026-04-08 00:02:20.825067 | orchestrator | }
2026-04-08 00:02:20.825072 | orchestrator |
2026-04-08 00:02:20.825077 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-04-08 00:02:20.825083 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-08 00:02:20.825088 | orchestrator | + all_tags = (known after apply)
2026-04-08 00:02:20.825093 | orchestrator | + cidr = "192.168.16.0/20"
2026-04-08 00:02:20.825099 | orchestrator | + dns_nameservers = [
2026-04-08 00:02:20.825104 | orchestrator | + "8.8.8.8",
2026-04-08 00:02:20.825110 | orchestrator | + "9.9.9.9",
2026-04-08 00:02:20.825115 | orchestrator | ]
2026-04-08 00:02:20.825120 | orchestrator | + enable_dhcp = true
2026-04-08 00:02:20.825129 | orchestrator | + gateway_ip = (known after apply)
2026-04-08 00:02:20.825137 | orchestrator | + id = (known after apply)
2026-04-08 00:02:20.825143 | orchestrator | + ip_version = 4
2026-04-08 00:02:20.825148 | orchestrator | + ipv6_address_mode = (known after apply)
2026-04-08 00:02:20.825153 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-04-08 00:02:20.825159 | orchestrator | + name = "subnet-testbed-management"
2026-04-08 00:02:20.825164 | orchestrator | + network_id = (known after apply) 2026-04-08 00:02:20.825169 | orchestrator | + no_gateway = false 2026-04-08 00:02:20.825174 | orchestrator | + region = (known after apply) 2026-04-08 00:02:20.825180 | orchestrator | + service_types = (known after apply) 2026-04-08 00:02:20.825190 | orchestrator | + tenant_id = (known after apply) 2026-04-08 00:02:20.825195 | orchestrator | 2026-04-08 00:02:20.825200 | orchestrator | + allocation_pool { 2026-04-08 00:02:20.825205 | orchestrator | + end = "192.168.31.250" 2026-04-08 00:02:20.825211 | orchestrator | + start = "192.168.31.200" 2026-04-08 00:02:20.825216 | orchestrator | } 2026-04-08 00:02:20.825221 | orchestrator | } 2026-04-08 00:02:20.825226 | orchestrator | 2026-04-08 00:02:20.825232 | orchestrator | # terraform_data.image will be created 2026-04-08 00:02:20.825237 | orchestrator | + resource "terraform_data" "image" { 2026-04-08 00:02:20.825242 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.825247 | orchestrator | + input = "Ubuntu 24.04" 2026-04-08 00:02:20.825253 | orchestrator | + output = (known after apply) 2026-04-08 00:02:20.825258 | orchestrator | } 2026-04-08 00:02:20.825263 | orchestrator | 2026-04-08 00:02:20.825268 | orchestrator | # terraform_data.image_node will be created 2026-04-08 00:02:20.825273 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-08 00:02:20.825279 | orchestrator | + id = (known after apply) 2026-04-08 00:02:20.825284 | orchestrator | + input = "Ubuntu 24.04" 2026-04-08 00:02:20.825289 | orchestrator | + output = (known after apply) 2026-04-08 00:02:20.825294 | orchestrator | } 2026-04-08 00:02:20.825300 | orchestrator | 2026-04-08 00:02:20.825305 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
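The plan output above implies Terraform source roughly like the following; this is a minimal sketch reconstructed from the planned attribute values, not the testbed's actual code, and which security group the VRRP rule attaches to is not visible in the plan (the node group is assumed here):

```hcl
# VRRP has no protocol name alias in Neutron, so the plan shows IP
# protocol number 112 passed as a string.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

# Management subnet: a /20 CIDR with a small DHCP allocation pool at
# the top of the range, matching the planned values.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```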
2026-04-08 00:02:20.825310 | orchestrator | 2026-04-08 00:02:20.825315 | orchestrator | Changes to Outputs: 2026-04-08 00:02:20.825321 | orchestrator | + manager_address = (sensitive value) 2026-04-08 00:02:20.825326 | orchestrator | + private_key = (sensitive value) 2026-04-08 00:02:24.604065 | orchestrator | terraform_data.image_node: Creating... 2026-04-08 00:02:24.604562 | orchestrator | terraform_data.image: Creating... 2026-04-08 00:02:24.604723 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=79744aa3-c36d-f53a-27ad-4fc712893a04] 2026-04-08 00:02:24.605577 | orchestrator | terraform_data.image: Creation complete after 0s [id=ee84d3e0-1751-f3bf-e9c4-dbfbe61b9b66] 2026-04-08 00:02:24.618047 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-08 00:02:24.629975 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-08 00:02:24.630126 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-08 00:02:24.631277 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-08 00:02:24.634155 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-08 00:02:24.634204 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-08 00:02:24.635256 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-08 00:02:24.639987 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-08 00:02:24.641355 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-08 00:02:24.643490 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-04-08 00:02:25.090646 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-08 00:02:25.098925 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
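Both outputs are marked sensitive, which is why the plan prints `(sensitive value)` and the apply summary later in the log shows them blank. A sketch of how such outputs are declared; the output names come from the log, but the value expressions (and the key resource name) are illustrative assumptions:

```hcl
# "sensitive = true" makes Terraform redact the value in plan/apply
# output; it is still stored in state and can be read with
# `terraform output -raw manager_address`.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  # Assumption: the key-generating resource is named for illustration only.
  value     = tls_private_key.testbed.private_key_openssh
  sensitive = true
}
```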
2026-04-08 00:02:25.128200 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-08 00:02:25.132399 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-08 00:02:25.171745 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-04-08 00:02:25.176515 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-04-08 00:02:25.722957 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=bc575c6e-3358-40c5-a097-75e9d74cc636] 2026-04-08 00:02:25.734639 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-08 00:02:28.318395 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=0911be4c-6cd6-4ed2-95f2-3749c0002df5] 2026-04-08 00:02:28.321715 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=a8171b98-d766-41eb-84f8-e0c6f3fec117] 2026-04-08 00:02:28.322857 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-04-08 00:02:28.325700 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=6d0a5819-af6a-4d5a-b5d8-55d4de9ca567] 2026-04-08 00:02:28.334820 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-04-08 00:02:28.336247 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-04-08 00:02:28.337955 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=5c872331-8a67-44e1-93cf-3b447520d047] 2026-04-08 00:02:28.343711 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-04-08 00:02:28.382792 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=f8a75de5-2ee8-4f26-b825-06a074879466] 2026-04-08 00:02:28.389043 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-04-08 00:02:28.394762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=d0f6de66-4fec-4fd7-97e2-1741dd54f232] 2026-04-08 00:02:28.399983 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-04-08 00:02:28.442216 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=706accd8-4e49-4054-bb21-fde08475a707] 2026-04-08 00:02:28.450527 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=7b23824a-491e-4dc1-9823-22fa2ac48d76] 2026-04-08 00:02:28.461834 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-08 00:02:28.462274 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-08 00:02:28.467983 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=bce0776003afce09fe469530c5961da6f22e51ba] 2026-04-08 00:02:28.471684 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=de859cdfe63511d732bff430aac9a590e392a604] 2026-04-08 00:02:28.472126 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=bf03eb4f-be44-4071-9b80-940b5dcac70f] 2026-04-08 00:02:28.472980 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-04-08 00:02:29.124150 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=c9f61651-5e83-48f3-a3a4-502c0dcb4422] 2026-04-08 00:02:29.510331 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=747f485d-29e7-4aba-86c8-165f29f086d6] 2026-04-08 00:02:29.510425 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-04-08 00:02:31.745019 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=41c9e370-cce9-4a92-aa7a-13c8738045eb] 2026-04-08 00:02:31.767628 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=5a9b4992-de90-4207-841b-10d280749dda] 2026-04-08 00:02:31.795739 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=1a79b7c9-a563-4433-b8f2-12de991d52c1] 2026-04-08 00:02:31.827915 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=b329422d-da47-45a8-ac99-562cc2d58717] 2026-04-08 00:02:31.854946 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=5de6439c-8009-46ad-8736-37ced6604b2d] 2026-04-08 00:02:31.872802 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=6dc1d032-8cd5-4ab4-b457-5c11f59554f4] 2026-04-08 00:02:33.194749 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=6c1624f9-e8ce-4ea9-9a5b-0a6ff4ef3038] 2026-04-08 00:02:33.201246 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-08 00:02:33.202185 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-04-08 00:02:33.202468 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 
2026-04-08 00:02:33.410956 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=674a8bb5-fc6b-4868-a92f-937105c03c73] 2026-04-08 00:02:33.420822 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-04-08 00:02:33.423215 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-04-08 00:02:33.424589 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-08 00:02:33.425288 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-08 00:02:33.433032 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-04-08 00:02:33.436555 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-08 00:02:33.437419 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d6726f75-da59-4e63-b57e-30655ac5983a] 2026-04-08 00:02:33.437624 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-04-08 00:02:33.438433 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-08 00:02:33.443925 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-04-08 00:02:33.752808 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=dcf0452c-445c-471a-af26-cc15c299553d] 2026-04-08 00:02:33.764367 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-04-08 00:02:34.087653 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=d40b4353-646a-46f1-856c-e2d907215500] 2026-04-08 00:02:34.094288 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
2026-04-08 00:02:34.115240 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e8e4eec1-1d36-4f6d-ac35-d2628901f62f] 2026-04-08 00:02:34.120814 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-08 00:02:34.206870 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=f3456665-6351-4329-b146-348d83e0bd20] 2026-04-08 00:02:34.213427 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-08 00:02:34.310912 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c0c2a58a-ee1b-4110-8e1f-6555d89b2e43] 2026-04-08 00:02:34.324836 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-08 00:02:34.332340 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=c12b9b5c-2258-4265-87b9-462868fb98bd] 2026-04-08 00:02:34.338418 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-04-08 00:02:34.342181 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=21b0622c-60e1-4f72-bade-4bd879236f53] 2026-04-08 00:02:34.350523 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-04-08 00:02:34.485147 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=fb8fa514-5f9b-4dc5-b8d4-c5a64507a454] 2026-04-08 00:02:34.560825 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=94a50734-793d-4214-9e5f-65bb8575fc08] 2026-04-08 00:02:34.566432 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=bec27311-8567-4faf-9986-7c89c9d7c5b8] 2026-04-08 00:02:34.738517 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=f49a131b-6277-4bb6-a9aa-3fedf14c2e33] 2026-04-08 00:02:34.829386 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f957ccf0-7e46-404a-81cf-99702d4bbf85] 2026-04-08 00:02:35.156828 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=ff14727d-c365-4db2-b7ef-f3026036cf06] 2026-04-08 00:02:35.269667 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=361b4f0f-7b76-4e10-9692-584fc9a72f49] 2026-04-08 00:02:35.323127 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=3071215a-c9de-4e53-a257-05b6f09e2900] 2026-04-08 00:02:35.560790 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=41bb05a8-b0b5-4a26-8a78-2e58b57dddb9] 2026-04-08 00:02:37.182239 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=37e84842-2435-4530-a26e-308e0add3d7d] 2026-04-08 00:02:37.217515 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-08 00:02:37.217700 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 
2026-04-08 00:02:37.222002 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-08 00:02:37.233768 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-08 00:02:37.234623 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-04-08 00:02:37.236632 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-04-08 00:02:37.241007 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-04-08 00:02:39.536863 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=0a35e490-bb58-4bfd-9f32-ea8985a9111a] 2026-04-08 00:02:39.545585 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-08 00:02:39.551112 | orchestrator | local_file.inventory: Creating... 2026-04-08 00:02:39.552221 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-08 00:02:39.554454 | orchestrator | local_file.inventory: Creation complete after 0s [id=8d18318a0291d0a948e1db9ba845ffe31a440cb1] 2026-04-08 00:02:39.559630 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fb0e81cb240e981afb71035198e2b0092703d09d] 2026-04-08 00:02:41.300463 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0a35e490-bb58-4bfd-9f32-ea8985a9111a] 2026-04-08 00:02:47.220866 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-04-08 00:02:47.226121 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-04-08 00:02:47.240511 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-04-08 00:02:47.242612 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2026-04-08 00:02:47.242674 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-04-08 00:02:47.242695 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-04-08 00:02:57.230430 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-08 00:02:57.230541 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-08 00:02:57.240692 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-08 00:02:57.242923 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-08 00:02:57.242981 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-08 00:02:57.242992 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-08 00:03:07.238346 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-04-08 00:03:07.238450 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-04-08 00:03:07.241577 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-04-08 00:03:07.243736 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-04-08 00:03:07.243794 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-04-08 00:03:07.243801 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-04-08 00:03:17.247138 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-04-08 00:03:17.247227 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[40s elapsed] 2026-04-08 00:03:17.247244 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-04-08 00:03:17.247251 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-04-08 00:03:17.247260 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-04-08 00:03:17.247316 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-04-08 00:03:17.981821 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=1bdbd0ce-8c2b-408a-96a5-c6f9fabb72e9] 2026-04-08 00:03:18.107828 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=fe7940d2-7c44-4418-87b2-fe09415f1ac0] 2026-04-08 00:03:18.415729 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=61aee7ab-acf4-40b9-8316-4eb040ac281f] 2026-04-08 00:03:18.540199 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=174df7be-5c8a-45cd-a09f-6dfcc3222a26] 2026-04-08 00:03:27.254461 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed] 2026-04-08 00:03:27.254637 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-04-08 00:03:28.622704 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 52s [id=70b0496a-9954-4ba4-9baf-8eaf13b08029] 2026-04-08 00:03:28.771027 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 52s [id=115d7240-52b0-4240-8323-f4592c203b8b] 2026-04-08 00:03:28.794517 | orchestrator | null_resource.node_semaphore: Creating... 
2026-04-08 00:03:28.806427 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3022835515644412564] 2026-04-08 00:03:28.818828 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-08 00:03:28.826898 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-08 00:03:28.829701 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-08 00:03:28.834799 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-04-08 00:03:28.863165 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-08 00:03:28.868609 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-04-08 00:03:28.874638 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-08 00:03:28.874808 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-08 00:03:28.881867 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-04-08 00:03:28.886596 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
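The attachment IDs that follow in the log are composed as `<instance id>/<volume id>`, matching the two arguments this resource takes. A hedged sketch; the actual volume-to-server index mapping is not visible in the log, so a simple modulo pairing is assumed for illustration:

```hcl
# Each attachment joins one server to one volume; Terraform stores its
# ID as "<instance_id>/<volume_id>", as the completion messages show.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = length(openstack_blockstorage_volume_v3.node_volume)
  instance_id = openstack_compute_instance_v2.node_server[count.index % length(openstack_compute_instance_v2.node_server)].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```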
2026-04-08 00:03:32.247323 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=1bdbd0ce-8c2b-408a-96a5-c6f9fabb72e9/0911be4c-6cd6-4ed2-95f2-3749c0002df5] 2026-04-08 00:03:32.346007 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=fe7940d2-7c44-4418-87b2-fe09415f1ac0/5c872331-8a67-44e1-93cf-3b447520d047] 2026-04-08 00:03:32.370912 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=174df7be-5c8a-45cd-a09f-6dfcc3222a26/a8171b98-d766-41eb-84f8-e0c6f3fec117] 2026-04-08 00:03:38.373601 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=1bdbd0ce-8c2b-408a-96a5-c6f9fabb72e9/6d0a5819-af6a-4d5a-b5d8-55d4de9ca567] 2026-04-08 00:03:38.472424 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=1bdbd0ce-8c2b-408a-96a5-c6f9fabb72e9/bf03eb4f-be44-4071-9b80-940b5dcac70f] 2026-04-08 00:03:38.478507 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=fe7940d2-7c44-4418-87b2-fe09415f1ac0/f8a75de5-2ee8-4f26-b825-06a074879466] 2026-04-08 00:03:38.508694 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=fe7940d2-7c44-4418-87b2-fe09415f1ac0/706accd8-4e49-4054-bb21-fde08475a707] 2026-04-08 00:03:38.516199 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=174df7be-5c8a-45cd-a09f-6dfcc3222a26/7b23824a-491e-4dc1-9823-22fa2ac48d76] 2026-04-08 00:03:38.564436 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=174df7be-5c8a-45cd-a09f-6dfcc3222a26/d0f6de66-4fec-4fd7-97e2-1741dd54f232] 2026-04-08 00:03:38.886650 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-04-08 00:03:48.887254 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-08 00:03:49.416104 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f8e76c94-3e3e-47b6-ba8c-449fa5b0e012] 2026-04-08 00:03:49.433636 | orchestrator | 2026-04-08 00:03:49.433731 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-04-08 00:03:49.433751 | orchestrator | 2026-04-08 00:03:49.433764 | orchestrator | Outputs: 2026-04-08 00:03:49.433777 | orchestrator | 2026-04-08 00:03:49.433802 | orchestrator | manager_address = 2026-04-08 00:03:49.433815 | orchestrator | private_key = 2026-04-08 00:03:49.570559 | orchestrator | ok: Runtime: 0:01:34.897107 2026-04-08 00:03:49.602911 | 2026-04-08 00:03:49.603052 | TASK [Create infrastructure (stable)] 2026-04-08 00:03:50.138036 | orchestrator | skipping: Conditional result was False 2026-04-08 00:03:50.156368 | 2026-04-08 00:03:50.156555 | TASK [Fetch manager address] 2026-04-08 00:03:50.654455 | orchestrator | ok 2026-04-08 00:03:50.663632 | 2026-04-08 00:03:50.663933 | TASK [Set manager_host address] 2026-04-08 00:03:50.727578 | orchestrator | ok 2026-04-08 00:03:50.734748 | 2026-04-08 00:03:50.734917 | LOOP [Update ansible collections] 2026-04-08 00:03:51.728584 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-08 00:03:51.729902 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-08 00:03:51.729981 | orchestrator | Starting galaxy collection install process 2026-04-08 00:03:51.730009 | orchestrator | Process install dependency map 2026-04-08 00:03:51.730032 | orchestrator | Starting collection install process 2026-04-08 00:03:51.730053 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2026-04-08 00:03:51.730080 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2026-04-08 00:03:51.730115 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-08 00:03:51.730206 | orchestrator | ok: Item: commons Runtime: 0:00:00.654655 2026-04-08 00:03:52.798338 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-08 00:03:52.798481 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-08 00:03:52.798524 | orchestrator | Starting galaxy collection install process 2026-04-08 00:03:52.798557 | orchestrator | Process install dependency map 2026-04-08 00:03:52.798587 | orchestrator | Starting collection install process 2026-04-08 00:03:52.798614 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2026-04-08 00:03:52.798642 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2026-04-08 00:03:52.798669 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-08 00:03:52.798712 | orchestrator | ok: Item: services Runtime: 0:00:00.690682 2026-04-08 00:03:52.812577 | 2026-04-08 00:03:52.812694 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-08 00:04:03.401451 | orchestrator | ok 2026-04-08 00:04:03.411156 | 2026-04-08 00:04:03.411329 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-08 00:05:03.452878 | orchestrator | ok 2026-04-08 00:05:03.464840 | 2026-04-08 00:05:03.464972 | TASK [Fetch manager ssh hostkey] 2026-04-08 00:05:05.045332 | orchestrator | Output suppressed because no_log was given 2026-04-08 00:05:05.059452 | 2026-04-08 
00:05:05.059640 | TASK [Get ssh keypair from terraform environment] 2026-04-08 00:05:05.597287 | orchestrator | ok: Runtime: 0:00:00.009884 2026-04-08 00:05:05.615427 | 2026-04-08 00:05:05.615588 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-08 00:05:05.657958 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-08 00:05:05.668259 | 2026-04-08 00:05:05.668382 | TASK [Run manager part 0] 2026-04-08 00:05:06.568855 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-08 00:05:06.619750 | orchestrator | 2026-04-08 00:05:06.619823 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-08 00:05:06.619835 | orchestrator | 2026-04-08 00:05:06.619855 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-08 00:05:08.367506 | orchestrator | ok: [testbed-manager] 2026-04-08 00:05:08.367576 | orchestrator | 2026-04-08 00:05:08.367603 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-08 00:05:08.367615 | orchestrator | 2026-04-08 00:05:08.367627 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:05:10.343736 | orchestrator | ok: [testbed-manager] 2026-04-08 00:05:10.343801 | orchestrator | 2026-04-08 00:05:10.343809 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-08 00:05:11.021479 | orchestrator | ok: [testbed-manager] 2026-04-08 00:05:11.021529 | orchestrator | 2026-04-08 00:05:11.021538 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-08 00:05:11.063063 | orchestrator | skipping: [testbed-manager] 2026-04-08 
00:05:11.063157 | orchestrator | 2026-04-08 00:05:11.063166 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-08 00:05:11.097355 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:05:11.097404 | orchestrator | 2026-04-08 00:05:11.097412 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-08 00:05:11.129943 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:05:11.129990 | orchestrator | 2026-04-08 00:05:11.129996 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-08 00:05:11.848021 | orchestrator | changed: [testbed-manager] 2026-04-08 00:05:11.848129 | orchestrator | 2026-04-08 00:05:11.848138 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-08 00:08:11.285563 | orchestrator | changed: [testbed-manager] 2026-04-08 00:08:11.285636 | orchestrator | 2026-04-08 00:08:11.285653 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-08 00:09:19.257214 | orchestrator | changed: [testbed-manager] 2026-04-08 00:09:19.257427 | orchestrator | 2026-04-08 00:09:19.257446 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-08 00:09:38.073393 | orchestrator | changed: [testbed-manager] 2026-04-08 00:09:38.073465 | orchestrator | 2026-04-08 00:09:38.073476 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-08 00:09:46.240772 | orchestrator | changed: [testbed-manager] 2026-04-08 00:09:46.240906 | orchestrator | 2026-04-08 00:09:46.240918 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-08 00:09:46.287652 | orchestrator | ok: [testbed-manager] 2026-04-08 00:09:46.287771 | orchestrator | 2026-04-08 00:09:46.287792 | orchestrator | TASK 
[Get current user] ******************************************************** 2026-04-08 00:09:47.063638 | orchestrator | ok: [testbed-manager] 2026-04-08 00:09:47.063684 | orchestrator | 2026-04-08 00:09:47.063693 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-08 00:09:47.796175 | orchestrator | changed: [testbed-manager] 2026-04-08 00:09:47.796241 | orchestrator | 2026-04-08 00:09:47.796252 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-08 00:09:53.974192 | orchestrator | changed: [testbed-manager] 2026-04-08 00:09:53.974281 | orchestrator | 2026-04-08 00:09:53.974297 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-08 00:10:00.205836 | orchestrator | changed: [testbed-manager] 2026-04-08 00:10:00.205914 | orchestrator | 2026-04-08 00:10:00.205928 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-08 00:10:02.851336 | orchestrator | changed: [testbed-manager] 2026-04-08 00:10:02.851590 | orchestrator | 2026-04-08 00:10:02.851609 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-08 00:10:04.523419 | orchestrator | changed: [testbed-manager] 2026-04-08 00:10:04.523529 | orchestrator | 2026-04-08 00:10:04.523559 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-08 00:10:05.607526 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-08 00:10:05.607588 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-08 00:10:05.607595 | orchestrator | 2026-04-08 00:10:05.607604 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-08 00:10:05.651629 | orchestrator | [DEPRECATION WARNING]: The connection's stdin 
object is deprecated. Call 2026-04-08 00:10:05.651716 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-08 00:10:05.651732 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-08 00:10:05.651746 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-08 00:10:08.791391 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-08 00:10:08.791455 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-08 00:10:08.791466 | orchestrator | 2026-04-08 00:10:08.791477 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-08 00:10:09.337317 | orchestrator | changed: [testbed-manager] 2026-04-08 00:10:09.337365 | orchestrator | 2026-04-08 00:10:09.337373 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-08 00:11:33.539095 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-08 00:11:33.539144 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-08 00:11:33.539152 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-08 00:11:33.539159 | orchestrator | 2026-04-08 00:11:33.539165 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-08 00:11:35.674118 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-08 00:11:35.674220 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-08 00:11:35.674245 | orchestrator | 2026-04-08 00:11:35.674268 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-08 00:11:35.674287 | orchestrator | 2026-04-08 00:11:35.674305 | orchestrator | TASK [Gathering Facts] ********************************************************* 
2026-04-08 00:11:36.947460 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:36.947562 | orchestrator | 2026-04-08 00:11:36.947580 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-08 00:11:36.999102 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:36.999169 | orchestrator | 2026-04-08 00:11:36.999177 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-08 00:11:37.065209 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:37.065264 | orchestrator | 2026-04-08 00:11:37.065270 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-08 00:11:37.855950 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:37.856038 | orchestrator | 2026-04-08 00:11:37.856057 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-08 00:11:38.601993 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:38.602088 | orchestrator | 2026-04-08 00:11:38.602103 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-08 00:11:40.015578 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-08 00:11:40.015672 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-08 00:11:40.015689 | orchestrator | 2026-04-08 00:11:40.015702 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-08 00:11:41.353043 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:41.353118 | orchestrator | 2026-04-08 00:11:41.353131 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-08 00:11:43.107267 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:11:43.107360 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-08 
00:11:43.107392 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:11:43.107406 | orchestrator | 2026-04-08 00:11:43.107418 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-08 00:11:43.158288 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:43.158325 | orchestrator | 2026-04-08 00:11:43.158330 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-08 00:11:43.226165 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:43.226213 | orchestrator | 2026-04-08 00:11:43.226224 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-08 00:11:43.772356 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:43.772441 | orchestrator | 2026-04-08 00:11:43.772456 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-08 00:11:43.842566 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:43.842618 | orchestrator | 2026-04-08 00:11:43.842623 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-08 00:11:44.700751 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:11:44.700846 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:44.700864 | orchestrator | 2026-04-08 00:11:44.700879 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-08 00:11:44.735299 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:44.735371 | orchestrator | 2026-04-08 00:11:44.735383 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-08 00:11:44.764613 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:44.764678 | orchestrator | 2026-04-08 00:11:44.764687 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-08 00:11:44.795253 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:44.795395 | orchestrator | 2026-04-08 00:11:44.795412 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-08 00:11:44.873349 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:44.873392 | orchestrator | 2026-04-08 00:11:44.873401 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-08 00:11:45.622432 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:45.622519 | orchestrator | 2026-04-08 00:11:45.622535 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-08 00:11:45.622547 | orchestrator | 2026-04-08 00:11:45.622561 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:11:47.008097 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:47.008132 | orchestrator | 2026-04-08 00:11:47.008138 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-08 00:11:47.946262 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:47.946355 | orchestrator | 2026-04-08 00:11:47.946372 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:11:47.946385 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-08 00:11:47.946397 | orchestrator | 2026-04-08 00:11:48.438856 | orchestrator | ok: Runtime: 0:06:42.071466 2026-04-08 00:11:48.450491 | 2026-04-08 00:11:48.450619 | TASK [Point out that logging in to the manager is now possible] 2026-04-08 00:11:48.498886 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 
2026-04-08 00:11:48.511491 | 2026-04-08 00:11:48.511673 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-08 00:11:48.544342 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-08 00:11:48.552321 | 2026-04-08 00:11:48.552447 | TASK [Run manager part 1 + 2] 2026-04-08 00:11:49.442276 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-08 00:11:49.496819 | orchestrator | 2026-04-08 00:11:49.496941 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-08 00:11:49.496961 | orchestrator | 2026-04-08 00:11:49.496990 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:11:52.444349 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:52.444447 | orchestrator | 2026-04-08 00:11:52.444504 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-08 00:11:52.482440 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:52.482519 | orchestrator | 2026-04-08 00:11:52.482536 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-08 00:11:52.526768 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:52.526868 | orchestrator | 2026-04-08 00:11:52.526923 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-08 00:11:52.563094 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:52.563178 | orchestrator | 2026-04-08 00:11:52.563194 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-08 00:11:52.622889 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:52.622950 | orchestrator | 2026-04-08 00:11:52.622960 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-08 00:11:52.689216 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:52.689301 | orchestrator | 2026-04-08 00:11:52.689319 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-08 00:11:52.732464 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-08 00:11:52.732547 | orchestrator | 2026-04-08 00:11:52.732561 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-08 00:11:53.435787 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:53.435907 | orchestrator | 2026-04-08 00:11:53.435927 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-08 00:11:53.492834 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:11:53.492953 | orchestrator | 2026-04-08 00:11:53.492970 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-08 00:11:54.862627 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:54.862731 | orchestrator | 2026-04-08 00:11:54.862753 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-08 00:11:55.418420 | orchestrator | ok: [testbed-manager] 2026-04-08 00:11:55.418508 | orchestrator | 2026-04-08 00:11:55.418525 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-08 00:11:56.559448 | orchestrator | changed: [testbed-manager] 2026-04-08 00:11:56.559535 | orchestrator | 2026-04-08 00:11:56.559554 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-08 00:12:11.176098 | orchestrator | changed: [testbed-manager] 2026-04-08 00:12:11.176186 | orchestrator | 
2026-04-08 00:12:11.176201 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-08 00:12:11.842503 | orchestrator | ok: [testbed-manager] 2026-04-08 00:12:11.842546 | orchestrator | 2026-04-08 00:12:11.842557 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-08 00:12:11.900871 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:12:11.900955 | orchestrator | 2026-04-08 00:12:11.900969 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-08 00:12:12.887440 | orchestrator | changed: [testbed-manager] 2026-04-08 00:12:12.887496 | orchestrator | 2026-04-08 00:12:12.887510 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-08 00:12:13.857795 | orchestrator | changed: [testbed-manager] 2026-04-08 00:12:13.857922 | orchestrator | 2026-04-08 00:12:13.857948 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-08 00:12:14.434372 | orchestrator | changed: [testbed-manager] 2026-04-08 00:12:14.434420 | orchestrator | 2026-04-08 00:12:14.434428 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-08 00:12:14.478316 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-08 00:12:14.478438 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-08 00:12:14.478455 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-08 00:12:14.478467 | orchestrator | deprecation_warnings=False in ansible.cfg. 
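The deprecation warning above names its own remedy: disabling deprecation warnings via ansible.cfg. The fragment below shows that setting in context; the section/key are the ones the warning itself mentions, applied to a hypothetical local ansible.cfg (this job does not actually set it).

```ini
; ansible.cfg — silence deprecation warnings, per the hint in the log above
[defaults]
deprecation_warnings = False
```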
2026-04-08 00:12:16.532547 | orchestrator | changed: [testbed-manager] 2026-04-08 00:12:16.532625 | orchestrator | 2026-04-08 00:12:16.532643 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-08 00:12:26.370192 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-08 00:12:26.370290 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-08 00:12:26.370310 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-08 00:12:26.370324 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-08 00:12:26.370344 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-08 00:12:26.370356 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-08 00:12:26.370367 | orchestrator | 2026-04-08 00:12:26.370379 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-08 00:12:27.406004 | orchestrator | changed: [testbed-manager] 2026-04-08 00:12:27.406124 | orchestrator | 2026-04-08 00:12:27.406140 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-08 00:12:30.524466 | orchestrator | changed: [testbed-manager] 2026-04-08 00:12:30.524667 | orchestrator | 2026-04-08 00:12:30.524681 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-08 00:12:30.570254 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:12:30.570318 | orchestrator | 2026-04-08 00:12:30.570332 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-08 00:14:07.917344 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:07.917445 | orchestrator | 2026-04-08 00:14:07.917462 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-08 00:14:09.058170 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:09.058270 | 
orchestrator | 2026-04-08 00:14:09.058289 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:14:09.058303 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-08 00:14:09.058314 | orchestrator | 2026-04-08 00:14:09.210699 | orchestrator | ok: Runtime: 0:02:20.268416 2026-04-08 00:14:09.225876 | 2026-04-08 00:14:09.226204 | TASK [Reboot manager] 2026-04-08 00:14:11.792572 | orchestrator | ok: Runtime: 0:00:01.872119 2026-04-08 00:14:11.810124 | 2026-04-08 00:14:11.810318 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-08 00:14:26.616527 | orchestrator | ok 2026-04-08 00:14:26.627624 | 2026-04-08 00:14:26.627749 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-08 00:15:26.678299 | orchestrator | ok 2026-04-08 00:15:26.688554 | 2026-04-08 00:15:26.688680 | TASK [Deploy manager + bootstrap nodes] 2026-04-08 00:15:29.238851 | orchestrator | 2026-04-08 00:15:29.239038 | orchestrator | # DEPLOY MANAGER 2026-04-08 00:15:29.239064 | orchestrator | 2026-04-08 00:15:29.239079 | orchestrator | + set -e 2026-04-08 00:15:29.239093 | orchestrator | + echo 2026-04-08 00:15:29.239107 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-08 00:15:29.239125 | orchestrator | + echo 2026-04-08 00:15:29.239176 | orchestrator | + cat /opt/manager-vars.sh 2026-04-08 00:15:29.242971 | orchestrator | export NUMBER_OF_NODES=6 2026-04-08 00:15:29.243022 | orchestrator | 2026-04-08 00:15:29.243035 | orchestrator | export CEPH_VERSION=reef 2026-04-08 00:15:29.243048 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-08 00:15:29.243061 | orchestrator | export MANAGER_VERSION=latest 2026-04-08 00:15:29.243085 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-08 00:15:29.243096 | orchestrator | 2026-04-08 00:15:29.243115 | orchestrator | export ARA=false 2026-04-08 00:15:29.243126 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-08 00:15:29.243144 | orchestrator | export TEMPEST=true 2026-04-08 00:15:29.243155 | orchestrator | export IS_ZUUL=true 2026-04-08 00:15:29.243166 | orchestrator | 2026-04-08 00:15:29.243184 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 00:15:29.243195 | orchestrator | export EXTERNAL_API=false 2026-04-08 00:15:29.243206 | orchestrator | 2026-04-08 00:15:29.243217 | orchestrator | export IMAGE_USER=ubuntu 2026-04-08 00:15:29.243231 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-08 00:15:29.243242 | orchestrator | 2026-04-08 00:15:29.243253 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-08 00:15:29.243269 | orchestrator | 2026-04-08 00:15:29.243280 | orchestrator | + echo 2026-04-08 00:15:29.243297 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-08 00:15:29.243903 | orchestrator | ++ export INTERACTIVE=false 2026-04-08 00:15:29.243923 | orchestrator | ++ INTERACTIVE=false 2026-04-08 00:15:29.243935 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-08 00:15:29.243982 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-08 00:15:29.244078 | orchestrator | + source /opt/manager-vars.sh 2026-04-08 00:15:29.244106 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-08 00:15:29.244118 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-08 00:15:29.244148 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-08 00:15:29.244160 | orchestrator | ++ CEPH_VERSION=reef 2026-04-08 00:15:29.244184 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-08 00:15:29.244195 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-08 00:15:29.244206 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-08 00:15:29.244217 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-08 00:15:29.244228 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-08 00:15:29.244247 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-08 00:15:29.244263 | orchestrator | ++ 
export ARA=false 2026-04-08 00:15:29.244275 | orchestrator | ++ ARA=false 2026-04-08 00:15:29.244298 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-08 00:15:29.244309 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-08 00:15:29.244320 | orchestrator | ++ export TEMPEST=true 2026-04-08 00:15:29.244331 | orchestrator | ++ TEMPEST=true 2026-04-08 00:15:29.244342 | orchestrator | ++ export IS_ZUUL=true 2026-04-08 00:15:29.244353 | orchestrator | ++ IS_ZUUL=true 2026-04-08 00:15:29.244388 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 00:15:29.244400 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 00:15:29.244410 | orchestrator | ++ export EXTERNAL_API=false 2026-04-08 00:15:29.244421 | orchestrator | ++ EXTERNAL_API=false 2026-04-08 00:15:29.244432 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-08 00:15:29.244447 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-08 00:15:29.244458 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-08 00:15:29.244469 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-08 00:15:29.244480 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-08 00:15:29.244491 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-08 00:15:29.244502 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-08 00:15:29.300441 | orchestrator | + docker version 2026-04-08 00:15:29.403029 | orchestrator | Client: Docker Engine - Community 2026-04-08 00:15:29.403129 | orchestrator | Version: 27.5.1 2026-04-08 00:15:29.403144 | orchestrator | API version: 1.47 2026-04-08 00:15:29.403157 | orchestrator | Go version: go1.22.11 2026-04-08 00:15:29.403167 | orchestrator | Git commit: 9f9e405 2026-04-08 00:15:29.403177 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-08 00:15:29.403189 | orchestrator | OS/Arch: linux/amd64 2026-04-08 00:15:29.403199 | orchestrator | Context: default 2026-04-08 00:15:29.403209 | orchestrator | 2026-04-08 
00:15:29.403219 | orchestrator | Server: Docker Engine - Community 2026-04-08 00:15:29.403229 | orchestrator | Engine: 2026-04-08 00:15:29.403239 | orchestrator | Version: 27.5.1 2026-04-08 00:15:29.403250 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-08 00:15:29.403289 | orchestrator | Go version: go1.22.11 2026-04-08 00:15:29.403300 | orchestrator | Git commit: 4c9b3b0 2026-04-08 00:15:29.403310 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-08 00:15:29.403320 | orchestrator | OS/Arch: linux/amd64 2026-04-08 00:15:29.403330 | orchestrator | Experimental: false 2026-04-08 00:15:29.403340 | orchestrator | containerd: 2026-04-08 00:15:29.403350 | orchestrator | Version: v2.2.2 2026-04-08 00:15:29.403404 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-08 00:15:29.403421 | orchestrator | runc: 2026-04-08 00:15:29.403437 | orchestrator | Version: 1.3.4 2026-04-08 00:15:29.403454 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-08 00:15:29.403471 | orchestrator | docker-init: 2026-04-08 00:15:29.403485 | orchestrator | Version: 0.19.0 2026-04-08 00:15:29.403496 | orchestrator | GitCommit: de40ad0 2026-04-08 00:15:29.405999 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-08 00:15:29.416759 | orchestrator | + set -e 2026-04-08 00:15:29.416858 | orchestrator | + source /opt/manager-vars.sh 2026-04-08 00:15:29.416881 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-08 00:15:29.416903 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-08 00:15:29.416923 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-08 00:15:29.416942 | orchestrator | ++ CEPH_VERSION=reef 2026-04-08 00:15:29.416963 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-08 00:15:29.416985 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-08 00:15:29.417004 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-08 00:15:29.417023 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-08 
00:15:29.417043 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-08 00:15:29.417062 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-08 00:15:29.417074 | orchestrator | ++ export ARA=false 2026-04-08 00:15:29.417086 | orchestrator | ++ ARA=false 2026-04-08 00:15:29.417108 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-08 00:15:29.417121 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-08 00:15:29.417132 | orchestrator | ++ export TEMPEST=true 2026-04-08 00:15:29.417143 | orchestrator | ++ TEMPEST=true 2026-04-08 00:15:29.417154 | orchestrator | ++ export IS_ZUUL=true 2026-04-08 00:15:29.417164 | orchestrator | ++ IS_ZUUL=true 2026-04-08 00:15:29.417176 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 00:15:29.417187 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 00:15:29.417198 | orchestrator | ++ export EXTERNAL_API=false 2026-04-08 00:15:29.417209 | orchestrator | ++ EXTERNAL_API=false 2026-04-08 00:15:29.417219 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-08 00:15:29.417230 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-08 00:15:29.417241 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-08 00:15:29.417252 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-08 00:15:29.417263 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-08 00:15:29.417274 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-08 00:15:29.417286 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-08 00:15:29.417296 | orchestrator | ++ export INTERACTIVE=false 2026-04-08 00:15:29.417307 | orchestrator | ++ INTERACTIVE=false 2026-04-08 00:15:29.417318 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-08 00:15:29.417333 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-08 00:15:29.417506 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-08 00:15:29.417607 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-08 00:15:29.417622 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-08 00:15:29.425092 | orchestrator | + set -e 2026-04-08 00:15:29.425171 | orchestrator | + VERSION=reef 2026-04-08 00:15:29.426413 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-08 00:15:29.432113 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-08 00:15:29.432161 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-08 00:15:29.437885 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-08 00:15:29.907924 | orchestrator | + set -e 2026-04-08 00:15:29.907999 | orchestrator | + VERSION=2024.2 2026-04-08 00:15:29.908014 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-08 00:15:29.908028 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-08 00:15:29.908043 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-08 00:15:29.908054 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-08 00:15:29.908067 | orchestrator | ++ semver latest 7.0.0 2026-04-08 00:15:29.908111 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-08 00:15:29.908123 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-08 00:15:29.908134 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-08 00:15:29.908144 | orchestrator | ++ semver latest 10.0.0-0 2026-04-08 00:15:29.908155 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-08 00:15:29.908166 | orchestrator | ++ semver 2024.2 2025.1 2026-04-08 00:15:29.908177 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-08 00:15:29.908188 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-08 00:15:29.908198 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-08 00:15:29.908209 | orchestrator | + source /opt/venv/bin/activate 
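The set-ceph-version.sh / set-openstack-version.sh traces above follow a grep-then-sed pattern: confirm the key is present in the configuration file, then rewrite its value in place. A minimal standalone sketch of the same pattern, using a temporary file as a stand-in for the real /opt/configuration tree:

```shell
#!/usr/bin/env bash
set -e

# Hypothetical stand-in for environments/manager/configuration.yml
CONFIG="$(mktemp)"
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$CONFIG"

VERSION=reef
# Guard first: only rewrite when the key really exists, so a mistyped
# key name cannot make the sed silently do nothing.
if [[ -n "$(grep '^ceph_version:' "$CONFIG")" ]]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONFIG"
fi

grep '^ceph_version:' "$CONFIG"   # -> ceph_version: reef
```

The guard mirrors the trace exactly: `[[ -n ceph_version: reef ]]` in the log is the expanded form of this grep check.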
2026-04-08 00:15:29.908220 | orchestrator | ++ deactivate nondestructive 2026-04-08 00:15:29.908231 | orchestrator | ++ '[' -n '' ']' 2026-04-08 00:15:29.908242 | orchestrator | ++ '[' -n '' ']' 2026-04-08 00:15:29.908253 | orchestrator | ++ hash -r 2026-04-08 00:15:29.908263 | orchestrator | ++ '[' -n '' ']' 2026-04-08 00:15:29.908274 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-08 00:15:29.908285 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-08 00:15:29.908297 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-08 00:15:29.908308 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-08 00:15:29.908318 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-08 00:15:29.908329 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-08 00:15:29.908340 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-08 00:15:29.908351 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-08 00:15:29.908409 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-08 00:15:29.908421 | orchestrator | ++ export PATH 2026-04-08 00:15:29.908432 | orchestrator | ++ '[' -n '' ']' 2026-04-08 00:15:29.908443 | orchestrator | ++ '[' -z '' ']' 2026-04-08 00:15:29.908453 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-08 00:15:29.908464 | orchestrator | ++ PS1='(venv) ' 2026-04-08 00:15:29.908475 | orchestrator | ++ export PS1 2026-04-08 00:15:29.908486 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-08 00:15:29.908497 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-08 00:15:29.908538 | orchestrator | ++ hash -r 2026-04-08 00:15:29.908565 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-08 00:15:31.128645 | orchestrator | 2026-04-08 00:15:31.128750 | 
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-08 00:15:31.128767 | orchestrator | 2026-04-08 00:15:31.128779 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-08 00:15:31.795987 | orchestrator | ok: [testbed-manager] 2026-04-08 00:15:31.796094 | orchestrator | 2026-04-08 00:15:31.796111 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-08 00:15:32.804121 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:32.804209 | orchestrator | 2026-04-08 00:15:32.804227 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-08 00:15:32.804240 | orchestrator | 2026-04-08 00:15:32.804251 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:15:37.290813 | orchestrator | ok: [testbed-manager] 2026-04-08 00:15:37.290930 | orchestrator | 2026-04-08 00:15:37.290949 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-08 00:15:37.354955 | orchestrator | ok: [testbed-manager] 2026-04-08 00:15:37.355054 | orchestrator | 2026-04-08 00:15:37.355071 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-08 00:15:37.812232 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:37.812321 | orchestrator | 2026-04-08 00:15:37.812332 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-08 00:15:37.850329 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:15:37.850400 | orchestrator | 2026-04-08 00:15:37.850407 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-08 00:15:38.184170 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:38.184284 | orchestrator | 2026-04-08 
00:15:38.184302 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-08 00:15:38.525457 | orchestrator | ok: [testbed-manager] 2026-04-08 00:15:38.525535 | orchestrator | 2026-04-08 00:15:38.525544 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-08 00:15:38.643205 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:15:38.643302 | orchestrator | 2026-04-08 00:15:38.643317 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-08 00:15:38.643329 | orchestrator | 2026-04-08 00:15:38.643339 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:15:40.389512 | orchestrator | ok: [testbed-manager] 2026-04-08 00:15:40.389632 | orchestrator | 2026-04-08 00:15:40.389662 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-08 00:15:40.489455 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-08 00:15:40.489574 | orchestrator | 2026-04-08 00:15:40.489597 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-08 00:15:40.556559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-08 00:15:40.556673 | orchestrator | 2026-04-08 00:15:40.556699 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-08 00:15:41.702357 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-08 00:15:41.702461 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-08 00:15:41.702477 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-08 00:15:41.702489 | orchestrator | 2026-04-08 00:15:41.702501 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-08 00:15:43.535772 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-08 00:15:43.535885 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-08 00:15:43.535901 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-08 00:15:43.535914 | orchestrator | 2026-04-08 00:15:43.535926 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-08 00:15:44.207777 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:15:44.207851 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:44.207861 | orchestrator | 2026-04-08 00:15:44.207867 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-08 00:15:44.877666 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:15:44.877760 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:44.877777 | orchestrator | 2026-04-08 00:15:44.877790 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-08 00:15:44.930416 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:15:44.930511 | orchestrator | 2026-04-08 00:15:44.930536 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-08 00:15:45.276323 | orchestrator | ok: [testbed-manager] 2026-04-08 00:15:45.276456 | orchestrator | 2026-04-08 00:15:45.276472 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-08 00:15:45.339730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-08 00:15:45.339819 | orchestrator | 2026-04-08 00:15:45.339834 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-04-08 00:15:46.480913 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:46.481014 | orchestrator | 2026-04-08 00:15:46.481031 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-08 00:15:47.325885 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:47.325978 | orchestrator | 2026-04-08 00:15:47.325997 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-08 00:15:56.380553 | orchestrator | changed: [testbed-manager] 2026-04-08 00:15:56.380679 | orchestrator | 2026-04-08 00:15:56.380716 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-08 00:15:56.423985 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:15:56.424102 | orchestrator | 2026-04-08 00:15:56.424129 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-08 00:15:56.424152 | orchestrator | 2026-04-08 00:15:56.424171 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:15:58.189665 | orchestrator | ok: [testbed-manager] 2026-04-08 00:15:58.189733 | orchestrator | 2026-04-08 00:15:58.189760 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-08 00:15:58.321048 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-08 00:15:58.321149 | orchestrator | 2026-04-08 00:15:58.321164 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-08 00:15:58.377980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:15:58.378125 | orchestrator | 2026-04-08 00:15:58.378137 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-04-08 00:16:00.709550 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:00.709682 | orchestrator | 2026-04-08 00:16:00.709701 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-08 00:16:00.761473 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:00.761572 | orchestrator | 2026-04-08 00:16:00.761586 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-08 00:16:00.884459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-08 00:16:00.884554 | orchestrator | 2026-04-08 00:16:00.884571 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-08 00:16:03.669880 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-08 00:16:03.669986 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-08 00:16:03.670001 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-08 00:16:03.670014 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-08 00:16:03.670084 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-08 00:16:03.670096 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-08 00:16:03.670107 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-08 00:16:03.670118 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-08 00:16:03.670130 | orchestrator | 2026-04-08 00:16:03.670142 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-08 00:16:04.272864 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:04.272965 | orchestrator | 2026-04-08 00:16:04.272981 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-04-08 00:16:04.871734 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:04.871838 | orchestrator | 2026-04-08 00:16:04.871854 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-08 00:16:04.929323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-08 00:16:04.929417 | orchestrator | 2026-04-08 00:16:04.929432 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-08 00:16:06.114448 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-08 00:16:06.114586 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-08 00:16:06.114606 | orchestrator | 2026-04-08 00:16:06.114623 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-08 00:16:06.696134 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:06.696241 | orchestrator | 2026-04-08 00:16:06.696265 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-08 00:16:06.735723 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:16:06.735819 | orchestrator | 2026-04-08 00:16:06.735834 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-08 00:16:06.794556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-08 00:16:06.794674 | orchestrator | 2026-04-08 00:16:06.794697 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-08 00:16:07.378081 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:07.378187 | orchestrator | 2026-04-08 00:16:07.378204 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-04-08 00:16:07.427224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-08 00:16:07.427433 | orchestrator | 2026-04-08 00:16:07.427454 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-08 00:16:08.754528 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:16:08.754635 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:16:08.754649 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:08.754663 | orchestrator | 2026-04-08 00:16:08.754675 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-08 00:16:09.350449 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:09.350559 | orchestrator | 2026-04-08 00:16:09.350575 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-08 00:16:09.399944 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:16:09.400051 | orchestrator | 2026-04-08 00:16:09.400066 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-08 00:16:09.488822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-08 00:16:09.488896 | orchestrator | 2026-04-08 00:16:09.488903 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-08 00:16:09.963973 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:09.964069 | orchestrator | 2026-04-08 00:16:09.964105 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-08 00:16:10.342279 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:10.342408 | orchestrator | 2026-04-08 00:16:10.342423 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-08 00:16:11.568521 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-08 00:16:11.568739 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-08 00:16:11.569560 | orchestrator | 2026-04-08 00:16:11.569591 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-08 00:16:12.178155 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:12.178239 | orchestrator | 2026-04-08 00:16:12.178250 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-08 00:16:12.549063 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:12.549163 | orchestrator | 2026-04-08 00:16:12.549180 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-08 00:16:12.884536 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:12.884665 | orchestrator | 2026-04-08 00:16:12.884684 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-08 00:16:12.928050 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:16:12.928173 | orchestrator | 2026-04-08 00:16:12.928196 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-08 00:16:12.998528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-08 00:16:12.998656 | orchestrator | 2026-04-08 00:16:12.998682 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-08 00:16:13.049057 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:13.049154 | orchestrator | 2026-04-08 00:16:13.049169 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-08 
00:16:14.956668 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-08 00:16:14.956781 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-08 00:16:14.956798 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-08 00:16:14.956810 | orchestrator | 2026-04-08 00:16:14.956823 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-08 00:16:15.644582 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:15.644731 | orchestrator | 2026-04-08 00:16:15.644762 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-08 00:16:16.348671 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:16.348754 | orchestrator | 2026-04-08 00:16:16.348766 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-08 00:16:17.079730 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:17.079862 | orchestrator | 2026-04-08 00:16:17.079891 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-08 00:16:17.156634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-08 00:16:17.156734 | orchestrator | 2026-04-08 00:16:17.156748 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-08 00:16:17.201365 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:17.201457 | orchestrator | 2026-04-08 00:16:17.201471 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-08 00:16:17.896530 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-08 00:16:17.896635 | orchestrator | 2026-04-08 00:16:17.896651 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-04-08 00:16:17.978321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-08 00:16:17.978447 | orchestrator | 2026-04-08 00:16:17.978476 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-08 00:16:18.717019 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:18.717124 | orchestrator | 2026-04-08 00:16:18.717140 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-08 00:16:19.351679 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:19.351782 | orchestrator | 2026-04-08 00:16:19.351798 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-08 00:16:19.411839 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:16:19.411932 | orchestrator | 2026-04-08 00:16:19.411946 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-08 00:16:19.472449 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:19.472553 | orchestrator | 2026-04-08 00:16:19.472569 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-08 00:16:20.265955 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:20.266108 | orchestrator | 2026-04-08 00:16:20.266126 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-08 00:17:32.403974 | orchestrator | changed: [testbed-manager] 2026-04-08 00:17:32.404057 | orchestrator | 2026-04-08 00:17:32.404066 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-08 00:17:33.275924 | orchestrator | ok: [testbed-manager] 2026-04-08 00:17:33.276027 | orchestrator | 2026-04-08 00:17:33.276044 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-04-08 00:17:33.320126 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:17:33.320255 | orchestrator | 2026-04-08 00:17:33.320270 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-08 00:17:35.813300 | orchestrator | changed: [testbed-manager] 2026-04-08 00:17:35.813397 | orchestrator | 2026-04-08 00:17:35.813419 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-08 00:17:35.892811 | orchestrator | ok: [testbed-manager] 2026-04-08 00:17:35.892937 | orchestrator | 2026-04-08 00:17:35.892978 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-08 00:17:35.892993 | orchestrator | 2026-04-08 00:17:35.893004 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-08 00:17:35.931782 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:17:35.931889 | orchestrator | 2026-04-08 00:17:35.931904 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-08 00:18:35.979457 | orchestrator | Pausing for 60 seconds 2026-04-08 00:18:35.979553 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:35.979571 | orchestrator | 2026-04-08 00:18:35.979584 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-08 00:18:39.108440 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:39.108534 | orchestrator | 2026-04-08 00:18:39.108551 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-08 00:19:20.581269 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-08 00:19:20.581371 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-08 00:19:20.581381 | orchestrator | changed: [testbed-manager] 2026-04-08 00:19:20.581410 | orchestrator | 2026-04-08 00:19:20.581417 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-08 00:19:25.758950 | orchestrator | changed: [testbed-manager] 2026-04-08 00:19:25.759935 | orchestrator | 2026-04-08 00:19:25.759988 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-08 00:19:25.840411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-08 00:19:25.840537 | orchestrator | 2026-04-08 00:19:25.840559 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-08 00:19:25.840573 | orchestrator | 2026-04-08 00:19:25.840585 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-08 00:19:25.891462 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:19:25.891531 | orchestrator | 2026-04-08 00:19:25.891538 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-08 00:19:25.958638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-08 00:19:25.958705 | orchestrator | 2026-04-08 00:19:25.958711 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-08 00:19:26.637831 | orchestrator | changed: [testbed-manager] 2026-04-08 00:19:26.637948 | orchestrator | 2026-04-08 00:19:26.637970 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-08 00:19:29.601531 | orchestrator | ok: [testbed-manager] 2026-04-08 00:19:29.601638 | orchestrator | 2026-04-08 00:19:29.601655 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-04-08 00:19:29.672631 | orchestrator | ok: [testbed-manager] => { 2026-04-08 00:19:29.672720 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-08 00:19:29.672733 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-08 00:19:29.672743 | orchestrator | "Checking running containers against expected versions...", 2026-04-08 00:19:29.672754 | orchestrator | "", 2026-04-08 00:19:29.672768 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-08 00:19:29.672778 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-08 00:19:29.672788 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.672798 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-08 00:19:29.672807 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.672817 | orchestrator | "", 2026-04-08 00:19:29.672827 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-08 00:19:29.672837 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-08 00:19:29.672847 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.672857 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-08 00:19:29.672866 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.672876 | orchestrator | "", 2026-04-08 00:19:29.672885 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-08 00:19:29.672895 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-08 00:19:29.672904 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.672914 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-08 00:19:29.672923 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.672933 | orchestrator | "", 2026-04-08 00:19:29.672942 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-08 00:19:29.672952 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-08 00:19:29.672962 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.672972 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-08 00:19:29.672986 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673001 | orchestrator | "", 2026-04-08 00:19:29.673017 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-08 00:19:29.673033 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-08 00:19:29.673156 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673176 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-08 00:19:29.673192 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673208 | orchestrator | "", 2026-04-08 00:19:29.673225 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-08 00:19:29.673243 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673260 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673271 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673281 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673290 | orchestrator | "", 2026-04-08 00:19:29.673300 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-08 00:19:29.673309 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-08 00:19:29.673319 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673329 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-08 00:19:29.673341 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673358 | orchestrator | "", 2026-04-08 00:19:29.673375 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-08 00:19:29.673391 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-08 00:19:29.673401 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673476 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-08 00:19:29.673487 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673496 | orchestrator | "", 2026-04-08 00:19:29.673517 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-08 00:19:29.673527 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-08 00:19:29.673542 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673552 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-08 00:19:29.673562 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673572 | orchestrator | "", 2026-04-08 00:19:29.673582 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-08 00:19:29.673591 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-08 00:19:29.673601 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673611 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-08 00:19:29.673620 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673630 | orchestrator | "", 2026-04-08 00:19:29.673639 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-08 00:19:29.673649 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673659 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673668 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673678 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673688 | orchestrator | "", 2026-04-08 00:19:29.673697 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-08 00:19:29.673707 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673717 | 
orchestrator | " Enabled: true", 2026-04-08 00:19:29.673726 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673736 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673746 | orchestrator | "", 2026-04-08 00:19:29.673755 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-08 00:19:29.673765 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673774 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673785 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673795 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673806 | orchestrator | "", 2026-04-08 00:19:29.673817 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-08 00:19:29.673828 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673839 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673849 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673872 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673883 | orchestrator | "", 2026-04-08 00:19:29.673894 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-08 00:19:29.673925 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673936 | orchestrator | " Enabled: true", 2026-04-08 00:19:29.673947 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-08 00:19:29.673958 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:19:29.673968 | orchestrator | "", 2026-04-08 00:19:29.673979 | orchestrator | "=== Summary ===", 2026-04-08 00:19:29.673990 | orchestrator | "Errors (version mismatches): 0", 2026-04-08 00:19:29.674001 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-08 00:19:29.674012 | orchestrator | "", 2026-04-08 00:19:29.674136 | orchestrator | "✅ All running containers match expected 
versions!" 2026-04-08 00:19:29.674149 | orchestrator | ] 2026-04-08 00:19:29.674160 | orchestrator | } 2026-04-08 00:19:29.674172 | orchestrator | 2026-04-08 00:19:29.674183 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-08 00:19:29.725628 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:19:29.725731 | orchestrator | 2026-04-08 00:19:29.725749 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:19:29.725763 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-08 00:19:29.725775 | orchestrator | 2026-04-08 00:19:29.826589 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-08 00:19:29.826710 | orchestrator | + deactivate 2026-04-08 00:19:29.826736 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-08 00:19:29.826763 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-08 00:19:29.826776 | orchestrator | + export PATH 2026-04-08 00:19:29.826788 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-08 00:19:29.826800 | orchestrator | + '[' -n '' ']' 2026-04-08 00:19:29.826811 | orchestrator | + hash -r 2026-04-08 00:19:29.826822 | orchestrator | + '[' -n '' ']' 2026-04-08 00:19:29.826834 | orchestrator | + unset VIRTUAL_ENV 2026-04-08 00:19:29.826844 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-08 00:19:29.826856 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-08 00:19:29.826867 | orchestrator | + unset -f deactivate 2026-04-08 00:19:29.826878 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-08 00:19:29.835283 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-08 00:19:29.835355 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-08 00:19:29.835378 | orchestrator | + local max_attempts=60 2026-04-08 00:19:29.835397 | orchestrator | + local name=ceph-ansible 2026-04-08 00:19:29.835409 | orchestrator | + local attempt_num=1 2026-04-08 00:19:29.836639 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:19:29.870339 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:19:29.870424 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-08 00:19:29.870446 | orchestrator | + local max_attempts=60 2026-04-08 00:19:29.870466 | orchestrator | + local name=kolla-ansible 2026-04-08 00:19:29.870486 | orchestrator | + local attempt_num=1 2026-04-08 00:19:29.871396 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-08 00:19:29.907000 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:19:29.907054 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-08 00:19:29.907068 | orchestrator | + local max_attempts=60 2026-04-08 00:19:29.907103 | orchestrator | + local name=osism-ansible 2026-04-08 00:19:29.907115 | orchestrator | + local attempt_num=1 2026-04-08 00:19:29.907950 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-08 00:19:29.943405 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:19:29.943453 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-08 00:19:29.943466 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-08 00:19:30.643136 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-08 00:19:30.804358 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-08 00:19:30.805511 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.805552 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.805565 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-08 00:19:30.805578 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-08 00:19:30.805589 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.805600 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.805611 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-04-08 00:19:30.805641 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.805652 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-08 00:19:30.805663 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-04-08 00:19:30.805674 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-08 00:19:30.805685 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.805696 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-08 00:19:30.805707 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.805718 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-08 00:19:30.810339 | orchestrator | ++ semver latest 7.0.0 2026-04-08 00:19:30.855193 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-08 00:19:30.855313 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-08 00:19:30.855344 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-08 00:19:30.860206 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-08 00:19:43.286509 | orchestrator | 2026-04-08 00:19:43 | INFO  | Prepare task for execution of resolvconf. 2026-04-08 00:19:43.512508 | orchestrator | 2026-04-08 00:19:43 | INFO  | Task e12f95cb-109f-4c3c-82fd-26d29debd80b (resolvconf) was prepared for execution. 2026-04-08 00:19:43.512636 | orchestrator | 2026-04-08 00:19:43 | INFO  | It takes a moment until task e12f95cb-109f-4c3c-82fd-26d29debd80b (resolvconf) has been started and output is visible here. 
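The `wait_for_container_healthy` calls traced above poll each manager container's health state via `docker inspect` before the deployment proceeds. A minimal sketch of such a wait loop, reconstructed from the trace (the testbed's actual helper script may differ; `DOCKER_INSPECT_CMD` is an injection point added here so the loop can be exercised without a Docker daemon):

```shell
# Hypothetical reconstruction of the health-wait loop seen in the trace.
# Polls the container's .State.Health.Status until it reports "healthy"
# or the attempt budget is exhausted.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Injectable for testing; defaults to the real docker inspect call.
    local inspect=${DOCKER_INSPECT_CMD:-"docker inspect -f {{.State.Health.Status}}"}
    until [[ "$($inspect "$name")" == "healthy" ]]; do
        if (( attempt_num == max_attempts )); then
            echo "$name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

In the trace above each container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) was already healthy on the first poll, so the loop exits immediately.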
2026-04-08 00:19:56.596708 | orchestrator | 2026-04-08 00:19:56.596823 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-08 00:19:56.596840 | orchestrator | 2026-04-08 00:19:56.596852 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:19:56.596864 | orchestrator | Wednesday 08 April 2026 00:19:46 +0000 (0:00:00.175) 0:00:00.175 ******* 2026-04-08 00:19:56.596876 | orchestrator | ok: [testbed-manager] 2026-04-08 00:19:56.596888 | orchestrator | 2026-04-08 00:19:56.596899 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-08 00:19:56.596911 | orchestrator | Wednesday 08 April 2026 00:19:51 +0000 (0:00:04.628) 0:00:04.803 ******* 2026-04-08 00:19:56.596922 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:19:56.596933 | orchestrator | 2026-04-08 00:19:56.596944 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-08 00:19:56.596955 | orchestrator | Wednesday 08 April 2026 00:19:51 +0000 (0:00:00.061) 0:00:04.865 ******* 2026-04-08 00:19:56.596967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-08 00:19:56.596979 | orchestrator | 2026-04-08 00:19:56.596990 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-08 00:19:56.597001 | orchestrator | Wednesday 08 April 2026 00:19:51 +0000 (0:00:00.078) 0:00:04.943 ******* 2026-04-08 00:19:56.597023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:19:56.597034 | orchestrator | 2026-04-08 00:19:56.597046 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-08 00:19:56.597087 | orchestrator | Wednesday 08 April 2026 00:19:51 +0000 (0:00:00.071) 0:00:05.014 ******* 2026-04-08 00:19:56.597099 | orchestrator | ok: [testbed-manager] 2026-04-08 00:19:56.597110 | orchestrator | 2026-04-08 00:19:56.597121 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-08 00:19:56.597133 | orchestrator | Wednesday 08 April 2026 00:19:52 +0000 (0:00:01.100) 0:00:06.114 ******* 2026-04-08 00:19:56.597145 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:19:56.597156 | orchestrator | 2026-04-08 00:19:56.597167 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-08 00:19:56.597178 | orchestrator | Wednesday 08 April 2026 00:19:52 +0000 (0:00:00.061) 0:00:06.175 ******* 2026-04-08 00:19:56.597189 | orchestrator | ok: [testbed-manager] 2026-04-08 00:19:56.597200 | orchestrator | 2026-04-08 00:19:56.597211 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-08 00:19:56.597222 | orchestrator | Wednesday 08 April 2026 00:19:52 +0000 (0:00:00.461) 0:00:06.637 ******* 2026-04-08 00:19:56.597233 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:19:56.597246 | orchestrator | 2026-04-08 00:19:56.597259 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-08 00:19:56.597273 | orchestrator | Wednesday 08 April 2026 00:19:52 +0000 (0:00:00.067) 0:00:06.704 ******* 2026-04-08 00:19:56.597286 | orchestrator | changed: [testbed-manager] 2026-04-08 00:19:56.597299 | orchestrator | 2026-04-08 00:19:56.597311 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-08 00:19:56.597324 | orchestrator | Wednesday 08 April 2026 00:19:53 +0000 (0:00:00.495) 0:00:07.200 ******* 2026-04-08 00:19:56.597337 | orchestrator | changed: 
[testbed-manager] 2026-04-08 00:19:56.597350 | orchestrator | 2026-04-08 00:19:56.597362 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-08 00:19:56.597375 | orchestrator | Wednesday 08 April 2026 00:19:54 +0000 (0:00:01.009) 0:00:08.209 ******* 2026-04-08 00:19:56.597388 | orchestrator | ok: [testbed-manager] 2026-04-08 00:19:56.597401 | orchestrator | 2026-04-08 00:19:56.597433 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-08 00:19:56.597444 | orchestrator | Wednesday 08 April 2026 00:19:55 +0000 (0:00:00.890) 0:00:09.100 ******* 2026-04-08 00:19:56.597455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-08 00:19:56.597466 | orchestrator | 2026-04-08 00:19:56.597477 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-08 00:19:56.597488 | orchestrator | Wednesday 08 April 2026 00:19:55 +0000 (0:00:00.079) 0:00:09.180 ******* 2026-04-08 00:19:56.597499 | orchestrator | changed: [testbed-manager] 2026-04-08 00:19:56.597510 | orchestrator | 2026-04-08 00:19:56.597520 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:19:56.597533 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-08 00:19:56.597544 | orchestrator | 2026-04-08 00:19:56.597555 | orchestrator | 2026-04-08 00:19:56.597566 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:19:56.597577 | orchestrator | Wednesday 08 April 2026 00:19:56 +0000 (0:00:01.026) 0:00:10.206 ******* 2026-04-08 00:19:56.597588 | orchestrator | =============================================================================== 2026-04-08 00:19:56.597599 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.63s 2026-04-08 00:19:56.597609 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s 2026-04-08 00:19:56.597620 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.03s 2026-04-08 00:19:56.597631 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.01s 2026-04-08 00:19:56.597642 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.89s 2026-04-08 00:19:56.597653 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2026-04-08 00:19:56.597681 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2026-04-08 00:19:56.597693 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-08 00:19:56.597704 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-08 00:19:56.597714 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-04-08 00:19:56.597725 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-04-08 00:19:56.597736 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-08 00:19:56.597747 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-08 00:19:56.713505 | orchestrator | + osism apply sshconfig 2026-04-08 00:20:08.152000 | orchestrator | 2026-04-08 00:20:08 | INFO  | Prepare task for execution of sshconfig. 2026-04-08 00:20:08.221945 | orchestrator | 2026-04-08 00:20:08 | INFO  | Task 81e6e669-f068-4ea6-8539-e7d02fe7ec99 (sshconfig) was prepared for execution. 
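The `osism apply sshconfig` run that follows writes one SSH config fragment per host into `.ssh/config.d` and then assembles them into a single config file, matching the task names in the play ("Ensure config for each host exist", "Assemble ssh config"). A rough sketch of that flow, under the assumption that each fragment is a plain `Host` block (the role's real template, user name, and options differ):

```shell
# Hypothetical sketch of the sshconfig role's fragment-and-assemble flow.
# A temporary directory stands in for the operator user's ~/.ssh.
ssh_dir=$(mktemp -d)
mkdir -p "$ssh_dir/config.d"

# One fragment per managed host (illustrative host list and options).
for host in testbed-manager testbed-node-0 testbed-node-1; do
    cat > "$ssh_dir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking yes
EOF
done

# Assemble all fragments into the effective ssh config.
cat "$ssh_dir"/config.d/* > "$ssh_dir/config"
```

Keeping per-host fragments separate makes the role idempotent per host: re-running it rewrites only the fragments whose content changed, and the assemble step regenerates the combined file.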
2026-04-08 00:20:08.222129 | orchestrator | 2026-04-08 00:20:08 | INFO  | It takes a moment until task 81e6e669-f068-4ea6-8539-e7d02fe7ec99 (sshconfig) has been started and output is visible here. 2026-04-08 00:20:18.237236 | orchestrator | 2026-04-08 00:20:18.237368 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-08 00:20:18.237394 | orchestrator | 2026-04-08 00:20:18.237408 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-08 00:20:18.237423 | orchestrator | Wednesday 08 April 2026 00:20:11 +0000 (0:00:00.141) 0:00:00.141 ******* 2026-04-08 00:20:18.237439 | orchestrator | ok: [testbed-manager] 2026-04-08 00:20:18.237456 | orchestrator | 2026-04-08 00:20:18.237472 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-08 00:20:18.237487 | orchestrator | Wednesday 08 April 2026 00:20:11 +0000 (0:00:00.866) 0:00:01.007 ******* 2026-04-08 00:20:18.237537 | orchestrator | changed: [testbed-manager] 2026-04-08 00:20:18.237556 | orchestrator | 2026-04-08 00:20:18.237571 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-08 00:20:18.237589 | orchestrator | Wednesday 08 April 2026 00:20:12 +0000 (0:00:00.457) 0:00:01.464 ******* 2026-04-08 00:20:18.237603 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-08 00:20:18.237619 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-08 00:20:18.237636 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-08 00:20:18.237650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-08 00:20:18.237664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-08 00:20:18.237681 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-08 00:20:18.237697 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-08 00:20:18.237713 | orchestrator | 2026-04-08 00:20:18.237729 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-08 00:20:18.237746 | orchestrator | Wednesday 08 April 2026 00:20:17 +0000 (0:00:05.075) 0:00:06.540 ******* 2026-04-08 00:20:18.237761 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:20:18.237777 | orchestrator | 2026-04-08 00:20:18.237793 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-08 00:20:18.237810 | orchestrator | Wednesday 08 April 2026 00:20:17 +0000 (0:00:00.093) 0:00:06.633 ******* 2026-04-08 00:20:18.237825 | orchestrator | changed: [testbed-manager] 2026-04-08 00:20:18.237840 | orchestrator | 2026-04-08 00:20:18.237856 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:20:18.237875 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:20:18.237893 | orchestrator | 2026-04-08 00:20:18.237910 | orchestrator | 2026-04-08 00:20:18.237927 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:20:18.237943 | orchestrator | Wednesday 08 April 2026 00:20:18 +0000 (0:00:00.494) 0:00:07.127 ******* 2026-04-08 00:20:18.237959 | orchestrator | =============================================================================== 2026-04-08 00:20:18.237974 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.08s 2026-04-08 00:20:18.237991 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.87s 2026-04-08 00:20:18.238008 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.49s 2026-04-08 00:20:18.238129 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.46s 2026-04-08 00:20:18.238148 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-04-08 00:20:18.350193 | orchestrator | + osism apply known-hosts 2026-04-08 00:20:29.614858 | orchestrator | 2026-04-08 00:20:29 | INFO  | Prepare task for execution of known-hosts. 2026-04-08 00:20:29.683022 | orchestrator | 2026-04-08 00:20:29 | INFO  | Task 404fe14a-c4b4-42ea-a781-d378a0440987 (known-hosts) was prepared for execution. 2026-04-08 00:20:29.683131 | orchestrator | 2026-04-08 00:20:29 | INFO  | It takes a moment until task 404fe14a-c4b4-42ea-a781-d378a0440987 (known-hosts) has been started and output is visible here. 2026-04-08 00:20:43.980141 | orchestrator | 2026-04-08 00:20:43.980246 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-08 00:20:43.980263 | orchestrator | 2026-04-08 00:20:43.980275 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-08 00:20:43.980288 | orchestrator | Wednesday 08 April 2026 00:20:32 +0000 (0:00:00.143) 0:00:00.143 ******* 2026-04-08 00:20:43.980300 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-08 00:20:43.980312 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-08 00:20:43.980323 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-08 00:20:43.980361 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-08 00:20:43.980372 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-08 00:20:43.980383 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-08 00:20:43.980394 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-08 00:20:43.980405 | orchestrator | 2026-04-08 00:20:43.980417 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-08 
00:20:43.980429 | orchestrator | Wednesday 08 April 2026 00:20:38 +0000 (0:00:05.995) 0:00:06.138 ******* 2026-04-08 00:20:43.980452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-08 00:20:43.980466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-08 00:20:43.980477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-08 00:20:43.980488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-08 00:20:43.980499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-08 00:20:43.980510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-08 00:20:43.980521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-08 00:20:43.980532 | orchestrator | 2026-04-08 00:20:43.980543 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:43.980554 | orchestrator | Wednesday 08 April 2026 00:20:38 +0000 (0:00:00.148) 0:00:06.287 ******* 2026-04-08 00:20:43.980566 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG2Q/7Tth0kcZUv+ROFIgW3353ki5bw6bFd3bslQC4gxkiuXtxLmbutOz2o31O1swLPx/9Q3D+feuWE7PX01RpE=) 2026-04-08 00:20:43.980582 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiRY4JBdsLtuTEvMEGeDyUfIF31kKzz8pKyIDlNo8ji0ffptx6PHyS0NVM43aPcLFWdFovOuNq8Zyv3yMgfWIEs0JC74lTDkA96ttpVmp2EdBMayIpH9flqXmWfpj/8vKu0PmhjD0XFAfN9J88QQ3ypXmB+SlNKG4bYCgh7MzG6KrAYF2/BGSb3HrFn8/BxQdflJAcpidYYM3Vn6TIraxSqjLqLIZcX/wdWvjX56+Mif5AegQqlBKc4aM36wRbO0FHPbHGmBez1xQyCQegcSLJSObLq1TbGtIFe1OZShrOCro2p39u8ApVrInsrQAvVgWs7DzWpAZFUvP65xl1JzVOrUqKiVYBucZwcZ5hl+XhxBU0ZF+E8RHjQTcdQs54f0FO+ecyrBRRhW516Qv0AXiuo075WLnS85l45i34W//IaFc81HVm1ZrwK9Qvh7mxsXHu2Aw6ZUggq9zhIeYoIQ/qr0MgED6H8gz4aVpk6w5UbXh9xLMZTpqjSE7UNxzpcSU=) 2026-04-08 00:20:43.980597 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOYNjMzV46T2pi5rkZiiwQ2qEPoomxm3xWdwjDhlKJ32) 2026-04-08 00:20:43.980610 | orchestrator | 2026-04-08 00:20:43.980621 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:43.980632 | orchestrator | Wednesday 08 April 2026 00:20:39 +0000 (0:00:01.129) 0:00:07.416 ******* 2026-04-08 00:20:43.980666 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVPglXwr+PHvlR7YiOKVVDo6/AAJK1deM1D4fvrLqHwHpSlULYW+El/RBss8CDANzIVmmAzhdr+zgyuTx8fKv/LgVCyf6Nv0rd/5ful6ThYEXDIPr7P51bTQuvfCzVu1FKnMOvq/IZlPF3Itw1snGT/6rT1b3rjeTPm0T29b9trbWVI+xXnrKfu3Pf6BSsdxjJZzf1VUqKzvyXNSSUz4hAp1/sBDSb1bfHs5J/qtDZSVH6mB2d3ofd+G/payCCIe55pd/LKW9uWnIQartqDUGCm8s2Z1y+bFyL1svv0GhRQLLHYHKL7Buw/XFfaxVX+urfNqWRphu4H1mW2+RjNBLjRTqIbcB2PyTvjEqf9YVbhI3JdFJo4xk139Bsp50jLJpwD33aIKlNzDXh/ugB8aA36wmnqw/4ThI4ldj6d6tpKqVIECLRxYRMERlttSfCA97V2ooa7H/1j240oOy/Kit34BG7ufLk8eefogMP3Rw+NQUkFgbZv+ZyRG9S25NktEE=) 
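The known_hosts play above runs `ssh-keyscan` against every host and writes the collected public keys (RSA, ECDSA, Ed25519 per host) into the manager's known_hosts file, so later Ansible runs connect without interactive host-key prompts. A sketch of that collection step, with the scanner command injectable so the flow can be exercised without live SSH daemons (the role's actual implementation is Ansible tasks, not this script):

```shell
# Hypothetical sketch of the known_hosts collection: scan each host's
# SSH host keys and write them all into one known_hosts file.
collect_known_hosts() {
    local outfile=$1; shift
    # Injectable for testing; defaults to a real ssh-keyscan with a timeout.
    local scan=${KEYSCAN_CMD:-"ssh-keyscan -T 5"}
    : > "$outfile"                       # start from an empty file
    for host in "$@"; do
        $scan "$host" >> "$outfile" 2>/dev/null \
            || echo "scan failed for $host" >&2
    done
}
```

Note that the play scans each host twice, once by hostname and once by `ansible_host` IP (e.g. `testbed-manager` and `192.168.16.5`), so both name forms resolve against the same recorded keys.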
2026-04-08 00:20:43.980688 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKE1BgBTCRh/27fqgI+566hNP2ad8hyP/nB9kj0LTfjFhz71cV2s/UrPotM73lmJtrm1LkFZm0oF4gnb0wat/vw=) 2026-04-08 00:20:43.980699 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGZHeGLYBiu95mM8Ybm6Md01aokSDXM/i9GgWqJuN20) 2026-04-08 00:20:43.980710 | orchestrator | 2026-04-08 00:20:43.980721 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:43.980732 | orchestrator | Wednesday 08 April 2026 00:20:40 +0000 (0:00:00.954) 0:00:08.370 ******* 2026-04-08 00:20:43.980744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTK5VmVWnP1JygGzOwp994RvTgipFUJkhvMfPcwI6FiIlWc/01OFkUgi3InoI2VXE9fvuVlXzz/cG1YNlcuUPRNqpxyPPIrXTQJwu2pKZz3pOr/Vu6h6HXylZq12ZTyAnh4hmBg+cRG+rKR/DZb2rv1DzMJicLt0tX63DOt8BCE73tspWCZ75CZyLCllIS//GFZ7sQairRzRDWMGC5qzjWG/Aw/8OQGVJE5hilnMYBQm090VVnzqj559prfvDLbBG8XA2g80Iry9tsx4tBPHK2SPa6W8eJfJMOo5oEbgVf63qY61hcPZlZDEonjH1VxIhgvdtFaRgjj/vv5i/sEkYXZy7VsRdEJfIbsx66blUtA4s8ilreqJAn/z5EtvxnldlHten+wlkCNyHDNDnNxOD3to3oH/XGcOLMN8Wo6JFTW5b0Ufe4gOLzK+lt6FenHna4oC/tPR+Z4kjNncLu8dPFPMBNAw/BljO0+z6CL+UF1jEIrnHnNk3o/ICn35+WJRc=) 2026-04-08 00:20:43.980755 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDmhUvTC0kPKCyNf+1ZoOyCFcdLvezXNoxHRFD2fOljRLBNnmzBTtLV/iCqyhWdLJRbe8LCK27IO6KEfog3TEj8=) 2026-04-08 00:20:43.980833 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA54sAErFdcoZSZHst5lFveQ5WTSLpE5eti0hEUgPWdf) 2026-04-08 00:20:43.980846 | orchestrator | 2026-04-08 00:20:43.980857 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:43.980868 
| orchestrator | Wednesday 08 April 2026 00:20:41 +0000 (0:00:00.884) 0:00:09.255 ******* 2026-04-08 00:20:43.980879 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ozyWaEo7o1DuI9GYVpYFDwu2aLolBkr+S3SCZpcCMSgaxuLXl3j3cp5bRO+kCJOnVBd6zDR8nWYIQvufOCiikkpVc4DzHykHTDbzAy6Gu0QqQnaNw5sXFW5tYd8F1jAve10TP9E2/u8d76E09Dzo1Kz8TNPZ1pB/r+7yTgcc4UnQuEX5vvGk/4rnqeoMefRecgyggX3Ou8O1peFTWz7jpYcfg70ixZN6k6plyAlMRRf6FIvhFX9LDpg6z3rFg29VUt9ZKNzptSZQ95qpm/yuHi8UtqXZCh5/HvxQxmfKoyzGOOdVVgUJGjZNAv0+/Kr/VGfYgCmicpTBMWPEfwpQhbzI3r6n7sz9FwDdnPbjgSngWvs76tAshmLEd8b7HggfRJgbix3prSQx2ozIhwM4vSMtd3ao+Mw3bhXGvJG0VKdD1fXFn8Z9ZpQIyCjHJlCBZw3x6bTx5uJgjJC7ztuhhWPp4nolyMHba6Fmm3gP/wjJeEQ2M8k1zqTheGvGAPc=) 2026-04-08 00:20:43.980890 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/L6GlaHamhDK3H9l/YWd4B1AcUB8wdyKPUXLk5FDe8CTlNy913yc8Ob0UNwuCDpT/7KqDvtDsR1inkRSyNobk=) 2026-04-08 00:20:43.980901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORCk6hvYyIXO5WkBbfxloMfz/S+B0Fi9sTOJ/+H8+Ns) 2026-04-08 00:20:43.980912 | orchestrator | 2026-04-08 00:20:43.980923 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:43.980934 | orchestrator | Wednesday 08 April 2026 00:20:42 +0000 (0:00:01.017) 0:00:10.272 ******* 2026-04-08 00:20:43.980946 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjIond7BaU3S49CvGiAXb5LOyGQTO1Dpaz5hEfBLM8tVCQ4F10cFs999vQ1embOZiET5bW6UJXblJO7MDmzP65BO96p1gFCIMI8DDfXopbzSEceo6WCEOaNRMhfdOuGXf4DvlApfkh1fEWUNB2msKcbZiVPuKMnwj/zT3nSSkpki2E3OQDWurWiSCKBqOAKRjsrRO4gMIEiS7MwGmge7JnUZ/YPJipTa4L9kTls5S9D7bHasmGlA6Gck9ZhhvLOXuutJMtTdmj7cozZ25SodIv1f5OYXRz4CP0BN204IRe1P4FzAdhkCV7MJodnvyVOyRgib7QiS09qU1hVw1anXQPI/Ig1ZC27z5lsqyHgePhRTm2BipM/MFIwNSUtF2Gtg9h35ipxowTd/z1aPQhpX8WsXIIilt0dTva+eo6JFTW77qG3XbPJZJ6sUaZdORNZFAGZIyC3EoTI1bk0gMXDJ/ChQvN7UpczDJBz0pucGrji7D1cEYtrBw6YSpg+N7DX80=) 2026-04-08 00:20:43.980964 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI56FjEYj6CWlOkcrh7kDN9juKZlu53dgekKDW9PPp7TXkX0hufXSjuIlXUdkdaj6CvVo3NSTILNIGMMBteULI4=) 2026-04-08 00:20:43.980975 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKB8RmjlSvKw1bC0k2WmjlmQcUzOv2r/r73WDltRiOm5) 2026-04-08 00:20:43.980986 | orchestrator | 2026-04-08 00:20:43.980997 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:43.981008 | orchestrator | Wednesday 08 April 2026 00:20:43 +0000 (0:00:01.039) 0:00:11.312 ******* 2026-04-08 00:20:43.981054 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDm4hRh1Sosz2f8+IzM4f6aQKXZkbE1H3ohA+DSmWMaQMOS8e7NFl+D+44ChqPmRjS1WuNh/+vL49hjWyXtkL9giTNsY5ziDyvzkKGzUKvb08dBhXCCIIKztz2pBD3TF02XxrBvvNbmqoWqCt12J4YPzNGAkCA9ZxoVqvofL9sIr2O22d0r7diehbWn53ESPufMi2OfFs3GfDrl1smu9ae2hvO4gUYgdV86q17UALqfilg8jiZJsVi6SWeF+f/7pfUAG3DauFat0QlCfBSKjwgKThngnt4GJWrl+mArO/0XyZ3nCbgysp2oGoQftTjeyGA1AzrjiMMaWKLC/kQTs9KMVEpmn1BR+vMxs10THEGciq5xQGHboXA8JXrarV+eTELpCf0KgwtIkY5LJPYQVrNcNfRyKQPa/O8FPWhnS4eS2GzKtVCYjR4EzHkTeo+27B8YAmLSwuLS4hNd67Rk3lFp4Es+80IPu6Suj3nhLHosmHSq8zROqEDkfo7LUhmPhcU=) 2026-04-08 00:20:55.086761 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7vWZ7ZDssKOV5tvhXCgeCb57tzgofyCKiA4vBYfjJPcNw1Fm8jVkD34/BgoPmHr9cA3/KExMrm43mR8/Mc1qM=) 2026-04-08 00:20:55.086841 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ+S/l+jNjVjrgKHIwPdOglTldNS+OKy6rA1TRwoja87) 2026-04-08 00:20:55.086851 | orchestrator | 2026-04-08 00:20:55.086858 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:55.086865 | orchestrator | Wednesday 08 April 2026 00:20:44 +0000 (0:00:01.004) 0:00:12.317 ******* 2026-04-08 00:20:55.086872 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbdOqx5eZljl1gEvjTRCKg8gDRsYq+clX8rqQ22winkPlNTBp1OACDcP9fzyz2MVQuD6vfXmNIv4TmxU8U7R5bSMw/jZ4eGNmn1DHt2n6QM5OmPtqUBvX8I9BuWmCxRPzIsnCVZxVuUFfQqzzpm0sG/0O+sS/cpTCaBiEUnGbR4cdLwBTnwifD6O2GYqVMXr6TsBN0QbYyQgLXQR/D6YlFpCPSAbfUYEkf+ztdMAjB61VWEB6skhnygJw7dTxBkoPXtSgoeFEswkWHrpaMYKlQ9YRPr8VdQcUCmkF24h+JMm70s/KWQB/r2UPN6vhwZ5zMrhQd0wbrRvbH1tbyBesop1YUY56dokzt32gZgxQRDNPW6Mo0jtDCmDapnYz66I2TFviUqWHkUE0vQDxafmUrPkGF3lzTpA6ZCFIdBkn7wF2BLPswiwvZPHFKNHwiHH5jkDK/MzKccIEtz+COdgoZTkM89KcP/15yPL9ki33Z8ElCxASHW13/Q2GdktE8KfE=) 2026-04-08 00:20:55.086879 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKFvtOa6X+tbJ8lY6mUsmbXD9an/NStY3xCCKdkKX0RRO+gejHZJx2tDttqYiltJtgobCr1QlP8VHa7IPR1Z7E4=) 2026-04-08 00:20:55.086885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGZFAAYmudddnUjFhz8Njmueqd01qXgqTfNu4zz54lNc) 2026-04-08 00:20:55.086891 | orchestrator | 2026-04-08 00:20:55.086897 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-08 00:20:55.086903 | orchestrator | Wednesday 08 April 2026 00:20:45 +0000 
(0:00:00.998) 0:00:13.315 ******* 2026-04-08 00:20:55.086910 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-08 00:20:55.086916 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-08 00:20:55.086921 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-08 00:20:55.086927 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-08 00:20:55.086932 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-08 00:20:55.086952 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-08 00:20:55.086958 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-08 00:20:55.086978 | orchestrator | 2026-04-08 00:20:55.086984 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-08 00:20:55.086991 | orchestrator | Wednesday 08 April 2026 00:20:50 +0000 (0:00:05.257) 0:00:18.573 ******* 2026-04-08 00:20:55.086997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-08 00:20:55.087005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-08 00:20:55.087010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-08 00:20:55.087048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-08 00:20:55.087055 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-08 00:20:55.087060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-08 00:20:55.087066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-08 00:20:55.087071 | orchestrator | 2026-04-08 00:20:55.087078 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:55.087087 | orchestrator | Wednesday 08 April 2026 00:20:51 +0000 (0:00:00.166) 0:00:18.739 ******* 2026-04-08 00:20:55.087097 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG2Q/7Tth0kcZUv+ROFIgW3353ki5bw6bFd3bslQC4gxkiuXtxLmbutOz2o31O1swLPx/9Q3D+feuWE7PX01RpE=) 2026-04-08 00:20:55.087123 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiRY4JBdsLtuTEvMEGeDyUfIF31kKzz8pKyIDlNo8ji0ffptx6PHyS0NVM43aPcLFWdFovOuNq8Zyv3yMgfWIEs0JC74lTDkA96ttpVmp2EdBMayIpH9flqXmWfpj/8vKu0PmhjD0XFAfN9J88QQ3ypXmB+SlNKG4bYCgh7MzG6KrAYF2/BGSb3HrFn8/BxQdflJAcpidYYM3Vn6TIraxSqjLqLIZcX/wdWvjX56+Mif5AegQqlBKc4aM36wRbO0FHPbHGmBez1xQyCQegcSLJSObLq1TbGtIFe1OZShrOCro2p39u8ApVrInsrQAvVgWs7DzWpAZFUvP65xl1JzVOrUqKiVYBucZwcZ5hl+XhxBU0ZF+E8RHjQTcdQs54f0FO+ecyrBRRhW516Qv0AXiuo075WLnS85l45i34W//IaFc81HVm1ZrwK9Qvh7mxsXHu2Aw6ZUggq9zhIeYoIQ/qr0MgED6H8gz4aVpk6w5UbXh9xLMZTpqjSE7UNxzpcSU=) 2026-04-08 00:20:55.087133 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOYNjMzV46T2pi5rkZiiwQ2qEPoomxm3xWdwjDhlKJ32) 2026-04-08 
00:20:55.087141 | orchestrator | 2026-04-08 00:20:55.087149 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:55.087157 | orchestrator | Wednesday 08 April 2026 00:20:52 +0000 (0:00:01.029) 0:00:19.768 ******* 2026-04-08 00:20:55.087165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGZHeGLYBiu95mM8Ybm6Md01aokSDXM/i9GgWqJuN20) 2026-04-08 00:20:55.087174 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVPglXwr+PHvlR7YiOKVVDo6/AAJK1deM1D4fvrLqHwHpSlULYW+El/RBss8CDANzIVmmAzhdr+zgyuTx8fKv/LgVCyf6Nv0rd/5ful6ThYEXDIPr7P51bTQuvfCzVu1FKnMOvq/IZlPF3Itw1snGT/6rT1b3rjeTPm0T29b9trbWVI+xXnrKfu3Pf6BSsdxjJZzf1VUqKzvyXNSSUz4hAp1/sBDSb1bfHs5J/qtDZSVH6mB2d3ofd+G/payCCIe55pd/LKW9uWnIQartqDUGCm8s2Z1y+bFyL1svv0GhRQLLHYHKL7Buw/XFfaxVX+urfNqWRphu4H1mW2+RjNBLjRTqIbcB2PyTvjEqf9YVbhI3JdFJo4xk139Bsp50jLJpwD33aIKlNzDXh/ugB8aA36wmnqw/4ThI4ldj6d6tpKqVIECLRxYRMERlttSfCA97V2ooa7H/1j240oOy/Kit34BG7ufLk8eefogMP3Rw+NQUkFgbZv+ZyRG9S25NktEE=) 2026-04-08 00:20:55.087191 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKE1BgBTCRh/27fqgI+566hNP2ad8hyP/nB9kj0LTfjFhz71cV2s/UrPotM73lmJtrm1LkFZm0oF4gnb0wat/vw=) 2026-04-08 00:20:55.087201 | orchestrator | 2026-04-08 00:20:55.087210 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:55.087219 | orchestrator | Wednesday 08 April 2026 00:20:53 +0000 (0:00:01.011) 0:00:20.780 ******* 2026-04-08 00:20:55.087228 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA54sAErFdcoZSZHst5lFveQ5WTSLpE5eti0hEUgPWdf) 2026-04-08 00:20:55.087237 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDTK5VmVWnP1JygGzOwp994RvTgipFUJkhvMfPcwI6FiIlWc/01OFkUgi3InoI2VXE9fvuVlXzz/cG1YNlcuUPRNqpxyPPIrXTQJwu2pKZz3pOr/Vu6h6HXylZq12ZTyAnh4hmBg+cRG+rKR/DZb2rv1DzMJicLt0tX63DOt8BCE73tspWCZ75CZyLCllIS//GFZ7sQairRzRDWMGC5qzjWG/Aw/8OQGVJE5hilnMYBQm090VVnzqj559prfvDLbBG8XA2g80Iry9tsx4tBPHK2SPa6W8eJfJMOo5oEbgVf63qY61hcPZlZDEonjH1VxIhgvdtFaRgjj/vv5i/sEkYXZy7VsRdEJfIbsx66blUtA4s8ilreqJAn/z5EtvxnldlHten+wlkCNyHDNDnNxOD3to3oH/XGcOLMN8Wo6JFTW5b0Ufe4gOLzK+lt6FenHna4oC/tPR+Z4kjNncLu8dPFPMBNAw/BljO0+z6CL+UF1jEIrnHnNk3o/ICn35+WJRc=) 2026-04-08 00:20:55.087246 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDmhUvTC0kPKCyNf+1ZoOyCFcdLvezXNoxHRFD2fOljRLBNnmzBTtLV/iCqyhWdLJRbe8LCK27IO6KEfog3TEj8=) 2026-04-08 00:20:55.087256 | orchestrator | 2026-04-08 00:20:55.087263 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:55.087268 | orchestrator | Wednesday 08 April 2026 00:20:54 +0000 (0:00:01.008) 0:00:21.788 ******* 2026-04-08 00:20:55.087274 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/L6GlaHamhDK3H9l/YWd4B1AcUB8wdyKPUXLk5FDe8CTlNy913yc8Ob0UNwuCDpT/7KqDvtDsR1inkRSyNobk=) 2026-04-08 00:20:55.087285 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ozyWaEo7o1DuI9GYVpYFDwu2aLolBkr+S3SCZpcCMSgaxuLXl3j3cp5bRO+kCJOnVBd6zDR8nWYIQvufOCiikkpVc4DzHykHTDbzAy6Gu0QqQnaNw5sXFW5tYd8F1jAve10TP9E2/u8d76E09Dzo1Kz8TNPZ1pB/r+7yTgcc4UnQuEX5vvGk/4rnqeoMefRecgyggX3Ou8O1peFTWz7jpYcfg70ixZN6k6plyAlMRRf6FIvhFX9LDpg6z3rFg29VUt9ZKNzptSZQ95qpm/yuHi8UtqXZCh5/HvxQxmfKoyzGOOdVVgUJGjZNAv0+/Kr/VGfYgCmicpTBMWPEfwpQhbzI3r6n7sz9FwDdnPbjgSngWvs76tAshmLEd8b7HggfRJgbix3prSQx2ozIhwM4vSMtd3ao+Mw3bhXGvJG0VKdD1fXFn8Z9ZpQIyCjHJlCBZw3x6bTx5uJgjJC7ztuhhWPp4nolyMHba6Fmm3gP/wjJeEQ2M8k1zqTheGvGAPc=) 
2026-04-08 00:20:55.087300 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORCk6hvYyIXO5WkBbfxloMfz/S+B0Fi9sTOJ/+H8+Ns) 2026-04-08 00:20:59.130889 | orchestrator | 2026-04-08 00:20:59.130990 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:59.131007 | orchestrator | Wednesday 08 April 2026 00:20:55 +0000 (0:00:01.017) 0:00:22.805 ******* 2026-04-08 00:20:59.131067 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI56FjEYj6CWlOkcrh7kDN9juKZlu53dgekKDW9PPp7TXkX0hufXSjuIlXUdkdaj6CvVo3NSTILNIGMMBteULI4=) 2026-04-08 00:20:59.131100 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjIond7BaU3S49CvGiAXb5LOyGQTO1Dpaz5hEfBLM8tVCQ4F10cFs999vQ1embOZiET5bW6UJXblJO7MDmzP65BO96p1gFCIMI8DDfXopbzSEceo6WCEOaNRMhfdOuGXf4DvlApfkh1fEWUNB2msKcbZiVPuKMnwj/zT3nSSkpki2E3OQDWurWiSCKBqOAKRjsrRO4gMIEiS7MwGmge7JnUZ/YPJipTa4L9kTls5S9D7bHasmGlA6Gck9ZhhvLOXuutJMtTdmj7cozZ25SodIv1f5OYXRz4CP0BN204IRe1P4FzAdhkCV7MJodnvyVOyRgib7QiS09qU1hVw1anXQPI/Ig1ZC27z5lsqyHgePhRTm2BipM/MFIwNSUtF2Gtg9h35ipxowTd/z1aPQhpX8WsXIIilt0dTva+eo6JFTW77qG3XbPJZJ6sUaZdORNZFAGZIyC3EoTI1bk0gMXDJ/ChQvN7UpczDJBz0pucGrji7D1cEYtrBw6YSpg+N7DX80=) 2026-04-08 00:20:59.131140 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKB8RmjlSvKw1bC0k2WmjlmQcUzOv2r/r73WDltRiOm5) 2026-04-08 00:20:59.131154 | orchestrator | 2026-04-08 00:20:59.131165 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:59.131176 | orchestrator | Wednesday 08 April 2026 00:20:56 +0000 (0:00:01.006) 0:00:23.812 ******* 2026-04-08 00:20:59.131187 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIJ+S/l+jNjVjrgKHIwPdOglTldNS+OKy6rA1TRwoja87) 2026-04-08 00:20:59.131198 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDm4hRh1Sosz2f8+IzM4f6aQKXZkbE1H3ohA+DSmWMaQMOS8e7NFl+D+44ChqPmRjS1WuNh/+vL49hjWyXtkL9giTNsY5ziDyvzkKGzUKvb08dBhXCCIIKztz2pBD3TF02XxrBvvNbmqoWqCt12J4YPzNGAkCA9ZxoVqvofL9sIr2O22d0r7diehbWn53ESPufMi2OfFs3GfDrl1smu9ae2hvO4gUYgdV86q17UALqfilg8jiZJsVi6SWeF+f/7pfUAG3DauFat0QlCfBSKjwgKThngnt4GJWrl+mArO/0XyZ3nCbgysp2oGoQftTjeyGA1AzrjiMMaWKLC/kQTs9KMVEpmn1BR+vMxs10THEGciq5xQGHboXA8JXrarV+eTELpCf0KgwtIkY5LJPYQVrNcNfRyKQPa/O8FPWhnS4eS2GzKtVCYjR4EzHkTeo+27B8YAmLSwuLS4hNd67Rk3lFp4Es+80IPu6Suj3nhLHosmHSq8zROqEDkfo7LUhmPhcU=) 2026-04-08 00:20:59.131210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7vWZ7ZDssKOV5tvhXCgeCb57tzgofyCKiA4vBYfjJPcNw1Fm8jVkD34/BgoPmHr9cA3/KExMrm43mR8/Mc1qM=) 2026-04-08 00:20:59.131221 | orchestrator | 2026-04-08 00:20:59.131232 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:20:59.131243 | orchestrator | Wednesday 08 April 2026 00:20:57 +0000 (0:00:01.008) 0:00:24.820 ******* 2026-04-08 00:20:59.131254 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbdOqx5eZljl1gEvjTRCKg8gDRsYq+clX8rqQ22winkPlNTBp1OACDcP9fzyz2MVQuD6vfXmNIv4TmxU8U7R5bSMw/jZ4eGNmn1DHt2n6QM5OmPtqUBvX8I9BuWmCxRPzIsnCVZxVuUFfQqzzpm0sG/0O+sS/cpTCaBiEUnGbR4cdLwBTnwifD6O2GYqVMXr6TsBN0QbYyQgLXQR/D6YlFpCPSAbfUYEkf+ztdMAjB61VWEB6skhnygJw7dTxBkoPXtSgoeFEswkWHrpaMYKlQ9YRPr8VdQcUCmkF24h+JMm70s/KWQB/r2UPN6vhwZ5zMrhQd0wbrRvbH1tbyBesop1YUY56dokzt32gZgxQRDNPW6Mo0jtDCmDapnYz66I2TFviUqWHkUE0vQDxafmUrPkGF3lzTpA6ZCFIdBkn7wF2BLPswiwvZPHFKNHwiHH5jkDK/MzKccIEtz+COdgoZTkM89KcP/15yPL9ki33Z8ElCxASHW13/Q2GdktE8KfE=) 2026-04-08 00:20:59.131266 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKFvtOa6X+tbJ8lY6mUsmbXD9an/NStY3xCCKdkKX0RRO+gejHZJx2tDttqYiltJtgobCr1QlP8VHa7IPR1Z7E4=) 2026-04-08 00:20:59.131277 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGZFAAYmudddnUjFhz8Njmueqd01qXgqTfNu4zz54lNc) 2026-04-08 00:20:59.131288 | orchestrator | 2026-04-08 00:20:59.131299 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-08 00:20:59.131309 | orchestrator | Wednesday 08 April 2026 00:20:58 +0000 (0:00:01.020) 0:00:25.840 ******* 2026-04-08 00:20:59.131321 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-08 00:20:59.131332 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-08 00:20:59.131343 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-08 00:20:59.131353 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-08 00:20:59.131364 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-08 00:20:59.131375 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-08 00:20:59.131386 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-08 00:20:59.131397 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:20:59.131408 | orchestrator | 2026-04-08 00:20:59.131436 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-08 00:20:59.131449 | orchestrator | Wednesday 08 April 2026 00:20:58 +0000 (0:00:00.185) 0:00:26.025 ******* 2026-04-08 00:20:59.131470 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:20:59.131483 | orchestrator | 2026-04-08 00:20:59.131495 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-08 00:20:59.131508 | orchestrator | Wednesday 08 April 2026 
00:20:58 +0000 (0:00:00.047) 0:00:26.073 ******* 2026-04-08 00:20:59.131521 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:20:59.131534 | orchestrator | 2026-04-08 00:20:59.131546 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-08 00:20:59.131560 | orchestrator | Wednesday 08 April 2026 00:20:58 +0000 (0:00:00.046) 0:00:26.119 ******* 2026-04-08 00:20:59.131572 | orchestrator | changed: [testbed-manager] 2026-04-08 00:20:59.131585 | orchestrator | 2026-04-08 00:20:59.131598 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:20:59.131611 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-08 00:20:59.131624 | orchestrator | 2026-04-08 00:20:59.131635 | orchestrator | 2026-04-08 00:20:59.131646 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:20:59.131657 | orchestrator | Wednesday 08 April 2026 00:20:58 +0000 (0:00:00.508) 0:00:26.628 ******* 2026-04-08 00:20:59.131668 | orchestrator | =============================================================================== 2026-04-08 00:20:59.131678 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s 2026-04-08 00:20:59.131689 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.26s 2026-04-08 00:20:59.131701 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-04-08 00:20:59.131712 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-08 00:20:59.131722 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-08 00:20:59.131733 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-08 
00:20:59.131744 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-08 00:20:59.131754 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-08 00:20:59.131765 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-08 00:20:59.131776 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-08 00:20:59.131787 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-08 00:20:59.131797 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-08 00:20:59.131808 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-08 00:20:59.131819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-08 00:20:59.131838 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-04-08 00:20:59.131849 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.88s 2026-04-08 00:20:59.131860 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2026-04-08 00:20:59.131871 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-04-08 00:20:59.131882 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-04-08 00:20:59.131893 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-04-08 00:20:59.312229 | orchestrator | + osism apply squid 2026-04-08 00:21:10.618079 | orchestrator | 2026-04-08 00:21:10 | INFO  | Prepare task for execution of squid. 
2026-04-08 00:21:10.692282 | orchestrator | 2026-04-08 00:21:10 | INFO  | Task c4ed3c49-b5cc-4c6c-9034-04e6ff6c5922 (squid) was prepared for execution. 2026-04-08 00:21:10.693041 | orchestrator | 2026-04-08 00:21:10 | INFO  | It takes a moment until task c4ed3c49-b5cc-4c6c-9034-04e6ff6c5922 (squid) has been started and output is visible here. 2026-04-08 00:23:06.623119 | orchestrator | 2026-04-08 00:23:06.623259 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-08 00:23:06.623278 | orchestrator | 2026-04-08 00:23:06.623292 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-08 00:23:06.623304 | orchestrator | Wednesday 08 April 2026 00:21:13 +0000 (0:00:00.185) 0:00:00.185 ******* 2026-04-08 00:23:06.623315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:23:06.623328 | orchestrator | 2026-04-08 00:23:06.623339 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-08 00:23:06.623350 | orchestrator | Wednesday 08 April 2026 00:21:13 +0000 (0:00:00.074) 0:00:00.260 ******* 2026-04-08 00:23:06.623361 | orchestrator | ok: [testbed-manager] 2026-04-08 00:23:06.623373 | orchestrator | 2026-04-08 00:23:06.623384 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-08 00:23:06.623395 | orchestrator | Wednesday 08 April 2026 00:21:16 +0000 (0:00:02.284) 0:00:02.544 ******* 2026-04-08 00:23:06.623407 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-08 00:23:06.623418 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-08 00:23:06.623429 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-08 00:23:06.623440 | orchestrator | 2026-04-08 00:23:06.623451 
| orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-08 00:23:06.623462 | orchestrator | Wednesday 08 April 2026 00:21:17 +0000 (0:00:01.192) 0:00:03.737 ******* 2026-04-08 00:23:06.623473 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-08 00:23:06.623484 | orchestrator | 2026-04-08 00:23:06.623496 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-08 00:23:06.623507 | orchestrator | Wednesday 08 April 2026 00:21:18 +0000 (0:00:01.040) 0:00:04.777 ******* 2026-04-08 00:23:06.623517 | orchestrator | ok: [testbed-manager] 2026-04-08 00:23:06.623528 | orchestrator | 2026-04-08 00:23:06.623539 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-08 00:23:06.623569 | orchestrator | Wednesday 08 April 2026 00:21:18 +0000 (0:00:00.337) 0:00:05.115 ******* 2026-04-08 00:23:06.623581 | orchestrator | changed: [testbed-manager] 2026-04-08 00:23:06.623592 | orchestrator | 2026-04-08 00:23:06.623603 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-08 00:23:06.623614 | orchestrator | Wednesday 08 April 2026 00:21:19 +0000 (0:00:00.880) 0:00:05.995 ******* 2026-04-08 00:23:06.623630 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-08 00:23:06.623655 | orchestrator | ok: [testbed-manager] 2026-04-08 00:23:06.623683 | orchestrator | 2026-04-08 00:23:06.623702 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-08 00:23:06.623720 | orchestrator | Wednesday 08 April 2026 00:21:53 +0000 (0:00:34.237) 0:00:40.232 ******* 2026-04-08 00:23:06.623739 | orchestrator | changed: [testbed-manager] 2026-04-08 00:23:06.623757 | orchestrator | 2026-04-08 00:23:06.623776 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-08 00:23:06.623794 | orchestrator | Wednesday 08 April 2026 00:22:05 +0000 (0:00:12.013) 0:00:52.246 ******* 2026-04-08 00:23:06.623813 | orchestrator | Pausing for 60 seconds 2026-04-08 00:23:06.623833 | orchestrator | changed: [testbed-manager] 2026-04-08 00:23:06.623851 | orchestrator | 2026-04-08 00:23:06.623871 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-08 00:23:06.623891 | orchestrator | Wednesday 08 April 2026 00:23:05 +0000 (0:01:00.088) 0:01:52.334 ******* 2026-04-08 00:23:06.623940 | orchestrator | ok: [testbed-manager] 2026-04-08 00:23:06.623957 | orchestrator | 2026-04-08 00:23:06.623972 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-08 00:23:06.624012 | orchestrator | Wednesday 08 April 2026 00:23:05 +0000 (0:00:00.050) 0:01:52.385 ******* 2026-04-08 00:23:06.624023 | orchestrator | changed: [testbed-manager] 2026-04-08 00:23:06.624034 | orchestrator | 2026-04-08 00:23:06.624045 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:23:06.624056 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:23:06.624067 | orchestrator | 2026-04-08 00:23:06.624078 | orchestrator | 2026-04-08 00:23:06.624089 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-08 00:23:06.624100 | orchestrator | Wednesday 08 April 2026 00:23:06 +0000 (0:00:00.525) 0:01:52.911 ******* 2026-04-08 00:23:06.624111 | orchestrator | =============================================================================== 2026-04-08 00:23:06.624122 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-04-08 00:23:06.624133 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.24s 2026-04-08 00:23:06.624143 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.01s 2026-04-08 00:23:06.624154 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.28s 2026-04-08 00:23:06.624165 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2026-04-08 00:23:06.624176 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s 2026-04-08 00:23:06.624186 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2026-04-08 00:23:06.624197 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.53s 2026-04-08 00:23:06.624208 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-04-08 00:23:06.624219 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-04-08 00:23:06.624230 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.05s 2026-04-08 00:23:06.780467 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-08 00:23:06.780552 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-08 00:23:06.783754 | orchestrator | + set -e 2026-04-08 00:23:06.783789 | orchestrator | + NAMESPACE=kolla 2026-04-08 
00:23:06.783800 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-08 00:23:06.790240 | orchestrator | ++ semver latest 9.0.0 2026-04-08 00:23:06.836410 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-08 00:23:06.836503 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-08 00:23:06.836624 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-08 00:23:18.204893 | orchestrator | 2026-04-08 00:23:18 | INFO  | Prepare task for execution of operator. 2026-04-08 00:23:18.290973 | orchestrator | 2026-04-08 00:23:18 | INFO  | Task 89ed4f04-773e-41a2-ab8b-1a5431882ec8 (operator) was prepared for execution. 2026-04-08 00:23:18.291042 | orchestrator | 2026-04-08 00:23:18 | INFO  | It takes a moment until task 89ed4f04-773e-41a2-ab8b-1a5431882ec8 (operator) has been started and output is visible here. 2026-04-08 00:23:33.495967 | orchestrator | 2026-04-08 00:23:33.496072 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-08 00:23:33.496089 | orchestrator | 2026-04-08 00:23:33.496101 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:23:33.496113 | orchestrator | Wednesday 08 April 2026 00:23:21 +0000 (0:00:00.185) 0:00:00.185 ******* 2026-04-08 00:23:33.496124 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:23:33.496137 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:23:33.496148 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:23:33.496159 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:23:33.496170 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:23:33.496180 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:23:33.496195 | orchestrator | 2026-04-08 00:23:33.496207 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-08 00:23:33.496240 | orchestrator | Wednesday 08 April 2026 00:23:24 
+0000 (0:00:03.414) 0:00:03.600 ******* 2026-04-08 00:23:33.496251 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:23:33.496262 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:23:33.496286 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:23:33.496297 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:23:33.496308 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:23:33.496318 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:23:33.496329 | orchestrator | 2026-04-08 00:23:33.496340 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-08 00:23:33.496351 | orchestrator | 2026-04-08 00:23:33.496362 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-08 00:23:33.496373 | orchestrator | Wednesday 08 April 2026 00:23:25 +0000 (0:00:00.797) 0:00:04.397 ******* 2026-04-08 00:23:33.496384 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:23:33.496395 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:23:33.496406 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:23:33.496417 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:23:33.496427 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:23:33.496438 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:23:33.496449 | orchestrator | 2026-04-08 00:23:33.496460 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-08 00:23:33.496471 | orchestrator | Wednesday 08 April 2026 00:23:25 +0000 (0:00:00.184) 0:00:04.581 ******* 2026-04-08 00:23:33.496481 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:23:33.496492 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:23:33.496503 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:23:33.496513 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:23:33.496524 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:23:33.496535 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:23:33.496545 | orchestrator | 
2026-04-08 00:23:33.496568 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-08 00:23:33.496580 | orchestrator | Wednesday 08 April 2026 00:23:26 +0000 (0:00:00.151) 0:00:04.733 ******* 2026-04-08 00:23:33.496591 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:23:33.496603 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:23:33.496614 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:23:33.496625 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:23:33.496636 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:23:33.496646 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:23:33.496657 | orchestrator | 2026-04-08 00:23:33.496669 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-08 00:23:33.496679 | orchestrator | Wednesday 08 April 2026 00:23:26 +0000 (0:00:00.682) 0:00:05.416 ******* 2026-04-08 00:23:33.496690 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:23:33.496701 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:23:33.496712 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:23:33.496723 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:23:33.496734 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:23:33.496744 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:23:33.496756 | orchestrator | 2026-04-08 00:23:33.496767 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-08 00:23:33.496778 | orchestrator | Wednesday 08 April 2026 00:23:27 +0000 (0:00:00.877) 0:00:06.293 ******* 2026-04-08 00:23:33.496789 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-08 00:23:33.496800 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-08 00:23:33.496811 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-04-08 00:23:33.496822 | orchestrator | changed: [testbed-node-5] => (item=adm) 
2026-04-08 00:23:33.496833 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-08 00:23:33.496844 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-08 00:23:33.496854 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-08 00:23:33.496865 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-08 00:23:33.496876 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-08 00:23:33.496915 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-08 00:23:33.496927 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-08 00:23:33.496938 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-08 00:23:33.496948 | orchestrator | 2026-04-08 00:23:33.496959 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-08 00:23:33.496970 | orchestrator | Wednesday 08 April 2026 00:23:28 +0000 (0:00:01.285) 0:00:07.579 ******* 2026-04-08 00:23:33.496981 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:23:33.496992 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:23:33.497003 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:23:33.497014 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:23:33.497024 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:23:33.497035 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:23:33.497046 | orchestrator | 2026-04-08 00:23:33.497057 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-08 00:23:33.497069 | orchestrator | Wednesday 08 April 2026 00:23:30 +0000 (0:00:01.365) 0:00:08.944 ******* 2026-04-08 00:23:33.497079 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:23:33.497091 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:23:33.497102 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 
2026-04-08 00:23:33.497113 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:23:33.497124 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:23:33.497152 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:23:33.497164 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-08 00:23:33.497175 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-08 00:23:33.497186 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-08 00:23:33.497196 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-08 00:23:33.497207 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-08 00:23:33.497218 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-08 00:23:33.497229 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:23:33.497240 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-08 00:23:33.497251 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-08 00:23:33.497267 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-08 00:23:33.497278 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:23:33.497289 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:23:33.497300 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:23:33.497311 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:23:33.497321 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:23:33.497332 | orchestrator | 2026-04-08 00:23:33.497343 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-08 00:23:33.497355 | orchestrator | Wednesday 08 April 2026 00:23:31 +0000 (0:00:01.249) 0:00:10.194 ******* 2026-04-08 00:23:33.497365 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:23:33.497376 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:23:33.497387 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:23:33.497398 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:23:33.497409 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:23:33.497420 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:23:33.497430 | orchestrator | 2026-04-08 00:23:33.497441 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-08 00:23:33.497459 | orchestrator | Wednesday 08 April 2026 00:23:31 +0000 (0:00:00.156) 0:00:10.350 ******* 2026-04-08 00:23:33.497470 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:23:33.497481 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:23:33.497491 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:23:33.497502 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:23:33.497513 | orchestrator | skipping:
[testbed-node-4] 2026-04-08 00:23:33.497524 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:23:33.497534 | orchestrator | 2026-04-08 00:23:33.497545 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-08 00:23:33.497556 | orchestrator | Wednesday 08 April 2026 00:23:31 +0000 (0:00:00.162) 0:00:10.513 ******* 2026-04-08 00:23:33.497567 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:23:33.497578 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:23:33.497589 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:23:33.497600 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:23:33.497610 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:23:33.497621 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:23:33.497632 | orchestrator | 2026-04-08 00:23:33.497643 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-08 00:23:33.497653 | orchestrator | Wednesday 08 April 2026 00:23:32 +0000 (0:00:00.617) 0:00:11.131 ******* 2026-04-08 00:23:33.497664 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:23:33.497675 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:23:33.497686 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:23:33.497696 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:23:33.497707 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:23:33.497718 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:23:33.497728 | orchestrator | 2026-04-08 00:23:33.497739 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-08 00:23:33.497750 | orchestrator | Wednesday 08 April 2026 00:23:32 +0000 (0:00:00.151) 0:00:11.283 ******* 2026-04-08 00:23:33.497761 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-08 00:23:33.497771 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:23:33.497782 | orchestrator | changed: 
[testbed-node-0] => (item=None) 2026-04-08 00:23:33.497793 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:23:33.497804 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:23:33.497814 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-08 00:23:33.497825 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:23:33.497836 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:23:33.497847 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:23:33.497857 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:23:33.497868 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:23:33.497879 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:23:33.497939 | orchestrator | 2026-04-08 00:23:33.497953 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-08 00:23:33.497964 | orchestrator | Wednesday 08 April 2026 00:23:33 +0000 (0:00:00.687) 0:00:11.970 ******* 2026-04-08 00:23:33.497974 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:23:33.497985 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:23:33.497996 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:23:33.498007 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:23:33.498096 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:23:33.498110 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:23:33.498122 | orchestrator | 2026-04-08 00:23:33.498133 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-08 00:23:33.498144 | orchestrator | Wednesday 08 April 2026 00:23:33 +0000 (0:00:00.124) 0:00:12.095 ******* 2026-04-08 00:23:33.498155 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:23:33.498166 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:23:33.498233 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:23:33.498246 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:23:33.498276 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:23:34.774359 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:23:34.774470 | orchestrator | 2026-04-08 00:23:34.774486 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-08 00:23:34.774500 | orchestrator | Wednesday 08 April 2026 00:23:33 +0000 (0:00:00.139) 0:00:12.234 ******* 2026-04-08 00:23:34.774511 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:23:34.774522 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:23:34.774533 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:23:34.774545 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:23:34.774555 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:23:34.774566 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:23:34.774577 | orchestrator | 2026-04-08 00:23:34.774588 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-08 00:23:34.774599 | orchestrator | Wednesday 08 April 2026 00:23:33 +0000 (0:00:00.149) 0:00:12.383 ******* 2026-04-08 00:23:34.774610 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:23:34.774621 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:23:34.774632 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:23:34.774643 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:23:34.774653 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:23:34.774664 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:23:34.774675 | orchestrator | 2026-04-08 00:23:34.774686 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-08 00:23:34.774697 | orchestrator | Wednesday 08 April 2026 00:23:34 +0000 (0:00:00.641) 0:00:13.025 ******* 2026-04-08 00:23:34.774708 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:23:34.774719 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:23:34.774730 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:23:34.774740 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:23:34.774751 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:23:34.774762 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:23:34.774773 | orchestrator | 2026-04-08 00:23:34.774784 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:23:34.774796 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:23:34.774809 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:23:34.774842 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:23:34.774853 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:23:34.774865 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:23:34.774878 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:23:34.774918 | orchestrator | 2026-04-08 00:23:34.774931 | orchestrator | 2026-04-08 00:23:34.774944 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:23:34.774957 | orchestrator | Wednesday 08 April 2026 00:23:34 +0000 (0:00:00.233) 0:00:13.258 ******* 2026-04-08 00:23:34.774970 | orchestrator | =============================================================================== 2026-04-08 00:23:34.774983 | orchestrator | Gathering Facts --------------------------------------------------------- 3.41s 2026-04-08 00:23:34.774996 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.37s 2026-04-08 00:23:34.775009 | 
orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s 2026-04-08 00:23:34.775043 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s 2026-04-08 00:23:34.775057 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s 2026-04-08 00:23:34.775069 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s 2026-04-08 00:23:34.775082 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2026-04-08 00:23:34.775094 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s 2026-04-08 00:23:34.775107 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s 2026-04-08 00:23:34.775120 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s 2026-04-08 00:23:34.775133 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2026-04-08 00:23:34.775146 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2026-04-08 00:23:34.775160 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-04-08 00:23:34.775172 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2026-04-08 00:23:34.775185 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2026-04-08 00:23:34.775198 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-04-08 00:23:34.775211 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-04-08 00:23:34.775223 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2026-04-08 
00:23:34.775234 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.12s 2026-04-08 00:23:34.998667 | orchestrator | + osism apply --environment custom facts 2026-04-08 00:23:36.275051 | orchestrator | 2026-04-08 00:23:36 | INFO  | Trying to run play facts in environment custom 2026-04-08 00:23:46.339935 | orchestrator | 2026-04-08 00:23:46 | INFO  | Prepare task for execution of facts. 2026-04-08 00:23:46.408516 | orchestrator | 2026-04-08 00:23:46 | INFO  | Task 209a87ed-b7c2-43ba-ba03-fc4bc64e84c1 (facts) was prepared for execution. 2026-04-08 00:23:46.408601 | orchestrator | 2026-04-08 00:23:46 | INFO  | It takes a moment until task 209a87ed-b7c2-43ba-ba03-fc4bc64e84c1 (facts) has been started and output is visible here. 2026-04-08 00:24:28.151563 | orchestrator | 2026-04-08 00:24:28.151671 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-08 00:24:28.151682 | orchestrator | 2026-04-08 00:24:28.151689 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-08 00:24:28.151708 | orchestrator | Wednesday 08 April 2026 00:23:49 +0000 (0:00:00.106) 0:00:00.106 ******* 2026-04-08 00:24:28.151715 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:28.151722 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:24:28.151729 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:28.151735 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:24:28.151741 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:24:28.151746 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:28.151752 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:24:28.151758 | orchestrator | 2026-04-08 00:24:28.151764 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-08 00:24:28.151770 | orchestrator | Wednesday 08 April 2026 00:23:50 +0000 (0:00:01.345) 0:00:01.452 
******* 2026-04-08 00:24:28.151776 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:28.151782 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:24:28.151788 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:24:28.151794 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:28.151800 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:24:28.151806 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:24:28.151812 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:28.151818 | orchestrator | 2026-04-08 00:24:28.151841 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-08 00:24:28.151847 | orchestrator | 2026-04-08 00:24:28.151896 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-08 00:24:28.151903 | orchestrator | Wednesday 08 April 2026 00:23:51 +0000 (0:00:01.170) 0:00:02.623 ******* 2026-04-08 00:24:28.151909 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:28.151916 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.151926 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:28.151936 | orchestrator | 2026-04-08 00:24:28.151945 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-08 00:24:28.151955 | orchestrator | Wednesday 08 April 2026 00:23:51 +0000 (0:00:00.093) 0:00:02.716 ******* 2026-04-08 00:24:28.151965 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:28.151973 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.151982 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:28.151991 | orchestrator | 2026-04-08 00:24:28.152000 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-08 00:24:28.152008 | orchestrator | Wednesday 08 April 2026 00:23:51 +0000 (0:00:00.162) 0:00:02.878 ******* 2026-04-08 00:24:28.152018 | orchestrator | ok: [testbed-node-3] 2026-04-08 
00:24:28.152027 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.152037 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:28.152046 | orchestrator | 2026-04-08 00:24:28.152056 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-08 00:24:28.152066 | orchestrator | Wednesday 08 April 2026 00:23:52 +0000 (0:00:00.161) 0:00:03.039 ******* 2026-04-08 00:24:28.152077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:24:28.152087 | orchestrator | 2026-04-08 00:24:28.152097 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-08 00:24:28.152106 | orchestrator | Wednesday 08 April 2026 00:23:52 +0000 (0:00:00.116) 0:00:03.156 ******* 2026-04-08 00:24:28.152115 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:28.152124 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:28.152134 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.152144 | orchestrator | 2026-04-08 00:24:28.152155 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-08 00:24:28.152165 | orchestrator | Wednesday 08 April 2026 00:23:52 +0000 (0:00:00.415) 0:00:03.572 ******* 2026-04-08 00:24:28.152175 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:24:28.152186 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:24:28.152196 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:24:28.152206 | orchestrator | 2026-04-08 00:24:28.152216 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-08 00:24:28.152226 | orchestrator | Wednesday 08 April 2026 00:23:52 +0000 (0:00:00.084) 0:00:03.656 ******* 2026-04-08 00:24:28.152235 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:28.152242 | orchestrator | 
changed: [testbed-node-4] 2026-04-08 00:24:28.152249 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:28.152256 | orchestrator | 2026-04-08 00:24:28.152262 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-08 00:24:28.152269 | orchestrator | Wednesday 08 April 2026 00:23:53 +0000 (0:00:00.993) 0:00:04.650 ******* 2026-04-08 00:24:28.152276 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:28.152283 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.152290 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:28.152297 | orchestrator | 2026-04-08 00:24:28.152304 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-08 00:24:28.152310 | orchestrator | Wednesday 08 April 2026 00:23:54 +0000 (0:00:00.438) 0:00:05.089 ******* 2026-04-08 00:24:28.152317 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:28.152324 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:24:28.152330 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:28.152337 | orchestrator | 2026-04-08 00:24:28.152352 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-08 00:24:28.152359 | orchestrator | Wednesday 08 April 2026 00:23:55 +0000 (0:00:01.062) 0:00:06.152 ******* 2026-04-08 00:24:28.152366 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:28.152372 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:24:28.152379 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:28.152385 | orchestrator | 2026-04-08 00:24:28.152392 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-08 00:24:28.152399 | orchestrator | Wednesday 08 April 2026 00:24:11 +0000 (0:00:16.370) 0:00:22.522 ******* 2026-04-08 00:24:28.152405 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:24:28.152412 | orchestrator | skipping: [testbed-node-4] 
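The `[WARNING]: Module remote_tmp /root/.ansible/tmp did not exist` message logged earlier suggests pre-creating the directory with the right permissions. A hedged remediation sketch (the path below is a temp-dir stand-in for `~/.ansible/tmp` on the managed node; adjust for the remote user actually in use):

```shell
# Pre-create Ansible's remote_tmp with mode 0700, as the warning advises.
# Stand-in path for demonstration; on a real node this would be
# ~/.ansible/tmp of the connecting user.
REMOTE_TMP="$(mktemp -d)/.ansible/tmp"
mkdir -p "$REMOTE_TMP"
chmod 0700 "$REMOTE_TMP"
```

With the directory already present at the expected mode, Ansible no longer needs to auto-create it and the warning does not appear.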
2026-04-08 00:24:28.152419 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:24:28.152426 | orchestrator | 2026-04-08 00:24:28.152432 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-08 00:24:28.152455 | orchestrator | Wednesday 08 April 2026 00:24:11 +0000 (0:00:00.081) 0:00:22.604 ******* 2026-04-08 00:24:28.152462 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:24:28.152469 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:28.152476 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:28.152483 | orchestrator | 2026-04-08 00:24:28.152490 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-08 00:24:28.152497 | orchestrator | Wednesday 08 April 2026 00:24:19 +0000 (0:00:07.598) 0:00:30.202 ******* 2026-04-08 00:24:28.152504 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:28.152511 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.152518 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:28.152523 | orchestrator | 2026-04-08 00:24:28.152529 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-08 00:24:28.152535 | orchestrator | Wednesday 08 April 2026 00:24:19 +0000 (0:00:00.442) 0:00:30.645 ******* 2026-04-08 00:24:28.152542 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-08 00:24:28.152548 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-08 00:24:28.152554 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-08 00:24:28.152560 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-08 00:24:28.152566 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-08 00:24:28.152571 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-08 00:24:28.152577 | orchestrator | 
changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-08 00:24:28.152583 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-08 00:24:28.152589 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-08 00:24:28.152595 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-08 00:24:28.152601 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-08 00:24:28.152606 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-08 00:24:28.152612 | orchestrator | 2026-04-08 00:24:28.152618 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-08 00:24:28.152624 | orchestrator | Wednesday 08 April 2026 00:24:23 +0000 (0:00:03.489) 0:00:34.134 ******* 2026-04-08 00:24:28.152630 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:28.152636 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:28.152641 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.152647 | orchestrator | 2026-04-08 00:24:28.152653 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-08 00:24:28.152659 | orchestrator | 2026-04-08 00:24:28.152665 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-08 00:24:28.152671 | orchestrator | Wednesday 08 April 2026 00:24:24 +0000 (0:00:01.267) 0:00:35.401 ******* 2026-04-08 00:24:28.152676 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:28.152688 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:24:28.152694 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:28.152699 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:28.152705 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:28.152711 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:28.152717 | orchestrator | ok: [testbed-node-5] 
2026-04-08 00:24:28.152722 | orchestrator | 2026-04-08 00:24:28.152728 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:24:28.152768 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:24:28.152775 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:24:28.152782 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:24:28.152788 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:24:28.152794 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:24:28.152800 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:24:28.152806 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:24:28.152812 | orchestrator | 2026-04-08 00:24:28.152818 | orchestrator | 2026-04-08 00:24:28.152824 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:24:28.152830 | orchestrator | Wednesday 08 April 2026 00:24:28 +0000 (0:00:03.670) 0:00:39.072 ******* 2026-04-08 00:24:28.152836 | orchestrator | =============================================================================== 2026-04-08 00:24:28.152841 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.37s 2026-04-08 00:24:28.152847 | orchestrator | Install required packages (Debian) -------------------------------------- 7.60s 2026-04-08 00:24:28.152893 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.67s 2026-04-08 00:24:28.152899 | orchestrator | Copy fact files 
--------------------------------------------------------- 3.49s 2026-04-08 00:24:28.152905 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s 2026-04-08 00:24:28.152911 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s 2026-04-08 00:24:28.152922 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s 2026-04-08 00:24:28.333303 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2026-04-08 00:24:28.333436 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.99s 2026-04-08 00:24:28.333453 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2026-04-08 00:24:28.333464 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s 2026-04-08 00:24:28.333475 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2026-04-08 00:24:28.333486 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s 2026-04-08 00:24:28.333497 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s 2026-04-08 00:24:28.333508 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2026-04-08 00:24:28.333520 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-04-08 00:24:28.333531 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.08s 2026-04-08 00:24:28.333542 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s 2026-04-08 00:24:28.516647 | orchestrator | + osism apply bootstrap 2026-04-08 00:24:39.831443 | orchestrator | 2026-04-08 00:24:39 | INFO  | Prepare task for execution of bootstrap. 
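PLAY RECAP lines like the ones above can be checked mechanically for trouble: a host is healthy when both `failed` and `unreachable` are zero. A minimal shell sketch (hypothetical helper, not part of OSISM or the job):

```shell
# Flag any nonzero failed= or unreachable= count in a PLAY RECAP line.
recap='testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0'
if printf '%s\n' "$recap" | grep -Eq '(failed|unreachable)=[1-9]'; then
    status=failed
else
    status=clean
fi
echo "$status"
```

Piping every recap line of a job log through the same `grep` gives a quick pass/fail signal without reading the full output.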
2026-04-08 00:24:39.954945 | orchestrator | 2026-04-08 00:24:39 | INFO  | Task fba66006-100c-45d0-823a-190cdf6c037f (bootstrap) was prepared for execution. 2026-04-08 00:24:39.955040 | orchestrator | 2026-04-08 00:24:39 | INFO  | It takes a moment until task fba66006-100c-45d0-823a-190cdf6c037f (bootstrap) has been started and output is visible here. 2026-04-08 00:24:55.609663 | orchestrator | 2026-04-08 00:24:55.609769 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-04-08 00:24:55.609785 | orchestrator | 2026-04-08 00:24:55.609796 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-04-08 00:24:55.609806 | orchestrator | Wednesday 08 April 2026 00:24:43 +0000 (0:00:00.187) 0:00:00.187 ******* 2026-04-08 00:24:55.609816 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:55.609827 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:55.609927 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:55.609944 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:24:55.609961 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:55.609971 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:55.609981 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:55.609991 | orchestrator | 2026-04-08 00:24:55.610001 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-08 00:24:55.610011 | orchestrator | 2026-04-08 00:24:55.610081 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-08 00:24:55.610093 | orchestrator | Wednesday 08 April 2026 00:24:43 +0000 (0:00:00.316) 0:00:00.503 ******* 2026-04-08 00:24:55.610104 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:24:55.610116 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:55.610128 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:55.610139 | orchestrator | ok: [testbed-manager] 2026-04-08 
00:24:55.610150 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:55.610161 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:55.610172 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:55.610183 | orchestrator |
2026-04-08 00:24:55.610194 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-08 00:24:55.610205 | orchestrator |
2026-04-08 00:24:55.610218 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-08 00:24:55.610231 | orchestrator | Wednesday 08 April 2026 00:24:48 +0000 (0:00:04.643) 0:00:05.147 *******
2026-04-08 00:24:55.610245 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-08 00:24:55.610259 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-08 00:24:55.610272 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-08 00:24:55.610285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:24:55.610297 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-08 00:24:55.610310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-08 00:24:55.610322 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-08 00:24:55.610336 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-08 00:24:55.610348 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-08 00:24:55.610361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-08 00:24:55.610373 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-08 00:24:55.610386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-08 00:24:55.610399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-08 00:24:55.610411 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-08 00:24:55.610423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-08 00:24:55.610436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-08 00:24:55.610477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-08 00:24:55.610490 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:24:55.610503 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-08 00:24:55.610516 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-08 00:24:55.610529 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-08 00:24:55.610542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-08 00:24:55.610555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-08 00:24:55.610567 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-08 00:24:55.610578 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-08 00:24:55.610589 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-08 00:24:55.610600 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-08 00:24:55.610624 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-08 00:24:55.610636 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:24:55.610647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-08 00:24:55.610658 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-08 00:24:55.610668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-08 00:24:55.610679 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-08 00:24:55.610690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-08 00:24:55.610701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-08 00:24:55.610712 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-08 00:24:55.610723 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-08 00:24:55.610734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:24:55.610744 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-08 00:24:55.610755 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-08 00:24:55.610766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:24:55.610777 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-08 00:24:55.610788 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-08 00:24:55.610799 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-08 00:24:55.610810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:24:55.610821 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:24:55.610912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-08 00:24:55.610926 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:24:55.610937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-08 00:24:55.610948 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-08 00:24:55.610958 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-08 00:24:55.610969 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:24:55.610980 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-08 00:24:55.610991 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-08 00:24:55.611001 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:24:55.611012 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:24:55.611023 | orchestrator |
2026-04-08 00:24:55.611034 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-08 00:24:55.611045 | orchestrator |
2026-04-08 00:24:55.611055 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-08 00:24:55.611066 | orchestrator | Wednesday 08 April 2026 00:24:48 +0000 (0:00:00.420) 0:00:05.567 *******
2026-04-08 00:24:55.611077 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:55.611088 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:55.611107 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:55.611119 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:55.611129 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:24:55.611140 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:24:55.611151 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:24:55.611162 | orchestrator |
2026-04-08 00:24:55.611173 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-08 00:24:55.611184 | orchestrator | Wednesday 08 April 2026 00:24:49 +0000 (0:00:01.360) 0:00:06.927 *******
2026-04-08 00:24:55.611195 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:55.611206 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:55.611216 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:24:55.611227 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:55.611238 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:24:55.611249 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:24:55.611259 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:55.611270 | orchestrator |
2026-04-08 00:24:55.611281 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-08 00:24:55.611292 | orchestrator | Wednesday 08 April 2026 00:24:51 +0000 (0:00:00.275) 0:00:08.174 *******
2026-04-08 00:24:55.611304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:24:55.611318 | orchestrator |
2026-04-08 00:24:55.611329 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-08 00:24:55.611340 | orchestrator | Wednesday 08 April 2026 00:24:51 +0000 (0:00:00.275) 0:00:08.450 *******
2026-04-08 00:24:55.611351 | orchestrator | changed: [testbed-manager]
2026-04-08 00:24:55.611362 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:24:55.611373 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:24:55.611384 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:24:55.611395 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:24:55.611405 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:24:55.611416 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:24:55.611427 | orchestrator |
2026-04-08 00:24:55.611438 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-08 00:24:55.611449 | orchestrator | Wednesday 08 April 2026 00:24:53 +0000 (0:00:01.569) 0:00:10.019 *******
2026-04-08 00:24:55.611460 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:24:55.611472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:24:55.611485 | orchestrator |
2026-04-08 00:24:55.611496 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-08 00:24:55.611507 | orchestrator | Wednesday 08 April 2026 00:24:53 +0000 (0:00:00.264) 0:00:10.284 *******
2026-04-08 00:24:55.611518 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:24:55.611529 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:24:55.611539 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:24:55.611550 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:24:55.611561 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:24:55.611572 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:24:55.611583 | orchestrator |
2026-04-08 00:24:55.611593 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-08 00:24:55.611605 | orchestrator | Wednesday 08 April 2026 00:24:54 +0000 (0:00:01.075) 0:00:11.359 *******
2026-04-08 00:24:55.611616 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:24:55.611627 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:24:55.611646 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:24:55.611657 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:24:55.611668 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:24:55.611679 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:24:55.611696 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:24:55.611707 | orchestrator |
2026-04-08 00:24:55.611718 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-08 00:24:55.611729 | orchestrator | Wednesday 08 April 2026 00:24:55 +0000 (0:00:00.698) 0:00:12.058 *******
2026-04-08 00:24:55.611739 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:24:55.611750 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:24:55.611761 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:24:55.611772 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:24:55.611782 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:24:55.611793 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:24:55.611804 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:55.611815 | orchestrator |
2026-04-08 00:24:55.611826 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-08 00:24:55.611855 | orchestrator | Wednesday 08 April 2026 00:24:55 +0000 (0:00:00.432) 0:00:12.490 *******
2026-04-08 00:24:55.611866 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:24:55.611877 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:24:55.611895 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:25:07.196167 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:25:07.196295 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:25:07.196322 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:25:07.196335 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:25:07.196347 | orchestrator |
2026-04-08 00:25:07.196359 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-08 00:25:07.196372 | orchestrator | Wednesday 08 April 2026 00:24:55 +0000 (0:00:00.200) 0:00:12.690 *******
2026-04-08 00:25:07.196386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:07.196415 | orchestrator |
2026-04-08 00:25:07.196427 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-08 00:25:07.196439 | orchestrator | Wednesday 08 April 2026 00:24:55 +0000 (0:00:00.291) 0:00:12.981 *******
2026-04-08 00:25:07.196450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:07.196461 | orchestrator |
2026-04-08 00:25:07.196472 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-08 00:25:07.196483 | orchestrator | Wednesday 08 April 2026 00:24:56 +0000 (0:00:00.278) 0:00:13.260 *******
2026-04-08 00:25:07.196494 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.196507 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.196518 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.196529 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.196540 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.196551 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.196561 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.196572 | orchestrator |
2026-04-08 00:25:07.196583 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-08 00:25:07.196595 | orchestrator | Wednesday 08 April 2026 00:24:57 +0000 (0:00:01.385) 0:00:14.646 *******
2026-04-08 00:25:07.196607 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:25:07.196618 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:25:07.196629 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:25:07.196640 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:25:07.196651 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:25:07.196662 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:25:07.196676 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:25:07.196688 | orchestrator |
2026-04-08 00:25:07.196702 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-08 00:25:07.196740 | orchestrator | Wednesday 08 April 2026 00:24:57 +0000 (0:00:00.190) 0:00:14.836 *******
2026-04-08 00:25:07.196754 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.196767 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.196780 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.196792 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.196805 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.196817 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.196889 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.196902 | orchestrator |
2026-04-08 00:25:07.196916 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-08 00:25:07.196929 | orchestrator | Wednesday 08 April 2026 00:24:58 +0000 (0:00:00.531) 0:00:15.368 *******
2026-04-08 00:25:07.196942 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:25:07.196955 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:25:07.196968 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:25:07.196980 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:25:07.196993 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:25:07.197007 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:25:07.197019 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:25:07.197030 | orchestrator |
2026-04-08 00:25:07.197041 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-08 00:25:07.197053 | orchestrator | Wednesday 08 April 2026 00:24:58 +0000 (0:00:00.237) 0:00:15.605 *******
2026-04-08 00:25:07.197064 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.197084 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:07.197095 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:07.197106 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:07.197117 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:25:07.197128 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:25:07.197139 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:25:07.197149 | orchestrator |
2026-04-08 00:25:07.197160 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-08 00:25:07.197171 | orchestrator | Wednesday 08 April 2026 00:24:59 +0000 (0:00:00.547) 0:00:16.153 *******
2026-04-08 00:25:07.197182 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.197193 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:07.197204 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:07.197215 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:07.197225 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:25:07.197236 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:25:07.197247 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:25:07.197258 | orchestrator |
2026-04-08 00:25:07.197269 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-08 00:25:07.197283 | orchestrator | Wednesday 08 April 2026 00:25:00 +0000 (0:00:01.100) 0:00:17.253 *******
2026-04-08 00:25:07.197302 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.197322 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.197343 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.197363 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.197383 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.197401 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.197420 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.197432 | orchestrator |
2026-04-08 00:25:07.197443 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-08 00:25:07.197454 | orchestrator | Wednesday 08 April 2026 00:25:01 +0000 (0:00:01.067) 0:00:18.321 *******
2026-04-08 00:25:07.197486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:07.197498 | orchestrator |
2026-04-08 00:25:07.197511 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-08 00:25:07.197529 | orchestrator | Wednesday 08 April 2026 00:25:01 +0000 (0:00:00.295) 0:00:18.616 *******
2026-04-08 00:25:07.197560 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:25:07.197579 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:25:07.197599 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:25:07.197618 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:07.197637 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:25:07.197649 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:07.197659 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:07.197670 | orchestrator |
2026-04-08 00:25:07.197681 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-08 00:25:07.197692 | orchestrator | Wednesday 08 April 2026 00:25:02 +0000 (0:00:01.296) 0:00:19.912 *******
2026-04-08 00:25:07.197703 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.197714 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.197725 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.197736 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.197746 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.197758 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.197777 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.197794 | orchestrator |
2026-04-08 00:25:07.197813 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-08 00:25:07.197856 | orchestrator | Wednesday 08 April 2026 00:25:03 +0000 (0:00:00.240) 0:00:20.152 *******
2026-04-08 00:25:07.197876 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.197889 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.197900 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.197911 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.197922 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.197932 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.197943 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.197954 | orchestrator |
2026-04-08 00:25:07.197965 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-08 00:25:07.197976 | orchestrator | Wednesday 08 April 2026 00:25:03 +0000 (0:00:00.218) 0:00:20.371 *******
2026-04-08 00:25:07.197986 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.197997 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.198008 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.198085 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.198105 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.198123 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.198140 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.198159 | orchestrator |
2026-04-08 00:25:07.198178 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-08 00:25:07.198197 | orchestrator | Wednesday 08 April 2026 00:25:03 +0000 (0:00:00.189) 0:00:20.561 *******
2026-04-08 00:25:07.198217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:07.198238 | orchestrator |
2026-04-08 00:25:07.198257 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-08 00:25:07.198275 | orchestrator | Wednesday 08 April 2026 00:25:03 +0000 (0:00:00.267) 0:00:20.828 *******
2026-04-08 00:25:07.198294 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.198307 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.198317 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.198328 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.198347 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.198365 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.198384 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.198404 | orchestrator |
2026-04-08 00:25:07.198421 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-08 00:25:07.198440 | orchestrator | Wednesday 08 April 2026 00:25:04 +0000 (0:00:00.581) 0:00:21.410 *******
2026-04-08 00:25:07.198451 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:25:07.198472 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:25:07.198489 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:25:07.198507 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:25:07.198526 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:25:07.198545 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:25:07.198563 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:25:07.198582 | orchestrator |
2026-04-08 00:25:07.198594 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-08 00:25:07.198605 | orchestrator | Wednesday 08 April 2026 00:25:04 +0000 (0:00:00.234) 0:00:21.644 *******
2026-04-08 00:25:07.198615 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.198626 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:07.198637 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.198648 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:07.198659 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:07.198669 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.198680 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.198691 | orchestrator |
2026-04-08 00:25:07.198702 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-08 00:25:07.198712 | orchestrator | Wednesday 08 April 2026 00:25:05 +0000 (0:00:00.979) 0:00:22.624 *******
2026-04-08 00:25:07.198723 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.198734 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:07.198745 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:07.198755 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:07.198766 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.198777 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.198787 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:07.198798 | orchestrator |
2026-04-08 00:25:07.198809 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-08 00:25:07.198819 | orchestrator | Wednesday 08 April 2026 00:25:06 +0000 (0:00:00.562) 0:00:23.186 *******
2026-04-08 00:25:07.198868 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:07.198879 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:07.198890 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:07.198901 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:07.198923 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.390186 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:50.390287 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:50.390296 | orchestrator |
2026-04-08 00:25:50.390302 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-08 00:25:50.390309 | orchestrator | Wednesday 08 April 2026 00:25:07 +0000 (0:00:01.024) 0:00:24.210 *******
2026-04-08 00:25:50.390324 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.390337 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.390342 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.390347 | orchestrator | changed: [testbed-manager]
2026-04-08 00:25:50.390352 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:50.390357 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:50.390362 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:50.390367 | orchestrator |
2026-04-08 00:25:50.390372 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-08 00:25:50.390377 | orchestrator | Wednesday 08 April 2026 00:25:25 +0000 (0:00:17.829) 0:00:42.040 *******
2026-04-08 00:25:50.390382 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.390387 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.390392 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.390396 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.390401 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.390405 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.390410 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.390414 | orchestrator |
2026-04-08 00:25:50.390419 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-08 00:25:50.390424 | orchestrator | Wednesday 08 April 2026 00:25:25 +0000 (0:00:00.238) 0:00:42.278 *******
2026-04-08 00:25:50.390428 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.390450 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.390455 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.390459 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.390464 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.390468 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.390475 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.390483 | orchestrator |
2026-04-08 00:25:50.390490 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-08 00:25:50.390498 | orchestrator | Wednesday 08 April 2026 00:25:25 +0000 (0:00:00.221) 0:00:42.499 *******
2026-04-08 00:25:50.390505 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.390512 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.390518 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.390525 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.390532 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.390540 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.390547 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.390554 | orchestrator |
2026-04-08 00:25:50.390562 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-08 00:25:50.390569 | orchestrator | Wednesday 08 April 2026 00:25:25 +0000 (0:00:00.232) 0:00:42.732 *******
2026-04-08 00:25:50.390579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:50.390588 | orchestrator |
2026-04-08 00:25:50.390596 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-08 00:25:50.390603 | orchestrator | Wednesday 08 April 2026 00:25:26 +0000 (0:00:00.292) 0:00:43.024 *******
2026-04-08 00:25:50.390610 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.390618 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.390626 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.390633 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.390641 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.390648 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.390655 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.390662 | orchestrator |
2026-04-08 00:25:50.390669 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-08 00:25:50.390677 | orchestrator | Wednesday 08 April 2026 00:25:27 +0000 (0:00:01.789) 0:00:44.813 *******
2026-04-08 00:25:50.390683 | orchestrator | changed: [testbed-manager]
2026-04-08 00:25:50.390707 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:50.390716 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:50.390723 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:25:50.390731 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:25:50.390738 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:50.390750 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:25:50.390758 | orchestrator |
2026-04-08 00:25:50.390767 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-08 00:25:50.390775 | orchestrator | Wednesday 08 April 2026 00:25:29 +0000 (0:00:01.172) 0:00:45.986 *******
2026-04-08 00:25:50.390782 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.390790 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.390817 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.390825 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.390832 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.390840 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.390847 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.390854 | orchestrator |
2026-04-08 00:25:50.390863 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-08 00:25:50.390869 | orchestrator | Wednesday 08 April 2026 00:25:29 +0000 (0:00:00.888) 0:00:46.875 *******
2026-04-08 00:25:50.390876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:50.390891 | orchestrator |
2026-04-08 00:25:50.390897 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-08 00:25:50.390903 | orchestrator | Wednesday 08 April 2026 00:25:30 +0000 (0:00:00.301) 0:00:47.176 *******
2026-04-08 00:25:50.390909 | orchestrator | changed: [testbed-manager]
2026-04-08 00:25:50.390914 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:50.390919 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:50.390924 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:25:50.390929 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:25:50.390934 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:50.390939 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:25:50.390945 | orchestrator |
2026-04-08 00:25:50.390964 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-08 00:25:50.390970 | orchestrator | Wednesday 08 April 2026 00:25:31 +0000 (0:00:01.117) 0:00:48.294 *******
2026-04-08 00:25:50.390975 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:25:50.390981 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:25:50.390986 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:25:50.390992 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:25:50.390997 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:25:50.391003 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:25:50.391008 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:25:50.391014 | orchestrator |
2026-04-08 00:25:50.391020 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-08 00:25:50.391028 | orchestrator | Wednesday 08 April 2026 00:25:31 +0000 (0:00:00.223) 0:00:48.517 *******
2026-04-08 00:25:50.391038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:50.391050 | orchestrator |
2026-04-08 00:25:50.391057 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-08 00:25:50.391065 | orchestrator | Wednesday 08 April 2026 00:25:31 +0000 (0:00:00.292) 0:00:48.809 *******
2026-04-08 00:25:50.391074 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.391083 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.391090 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.391097 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.391104 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.391110 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.391118 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.391125 | orchestrator |
2026-04-08 00:25:50.391132 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-08 00:25:50.391140 | orchestrator | Wednesday 08 April 2026 00:25:33 +0000 (0:00:01.862) 0:00:50.671 *******
2026-04-08 00:25:50.391148 | orchestrator | changed: [testbed-manager]
2026-04-08 00:25:50.391155 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:50.391163 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:50.391171 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:50.391178 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:25:50.391185 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:25:50.391192 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:25:50.391197 | orchestrator |
2026-04-08 00:25:50.391201 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-08 00:25:50.391206 | orchestrator | Wednesday 08 April 2026 00:25:34 +0000 (0:00:01.232) 0:00:51.904 *******
2026-04-08 00:25:50.391211 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:25:50.391215 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:25:50.391220 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:25:50.391224 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:25:50.391229 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:25:50.391233 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:25:50.391245 | orchestrator | changed: [testbed-manager]
2026-04-08 00:25:50.391256 | orchestrator |
2026-04-08 00:25:50.391265 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-08 00:25:50.391273 | orchestrator | Wednesday 08 April 2026 00:25:47 +0000 (0:00:12.112) 0:01:04.017 *******
2026-04-08 00:25:50.391281 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.391287 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.391295 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.391302 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.391309 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.391316 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.391322 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.391329 | orchestrator |
2026-04-08 00:25:50.391336 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-08 00:25:50.391342 | orchestrator | Wednesday 08 April 2026 00:25:48 +0000 (0:00:01.634) 0:01:05.652 *******
2026-04-08 00:25:50.391349 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.391356 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.391362 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.391369 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.391375 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.391382 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.391389 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.391396 | orchestrator |
2026-04-08 00:25:50.391408 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-08 00:25:50.391415 | orchestrator | Wednesday 08 April 2026 00:25:49 +0000 (0:00:00.939) 0:01:06.592 *******
2026-04-08 00:25:50.391421 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.391428 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.391434 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.391441 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.391447 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.391454 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.391460 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.391467 | orchestrator |
2026-04-08 00:25:50.391473 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-08 00:25:50.391481 | orchestrator | Wednesday 08 April 2026 00:25:49 +0000 (0:00:00.246) 0:01:06.838 *******
2026-04-08 00:25:50.391488 | orchestrator | ok: [testbed-manager]
2026-04-08 00:25:50.391495 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:25:50.391502 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:25:50.391508 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:25:50.391515 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:25:50.391521 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:25:50.391528 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:25:50.391535 | orchestrator |
2026-04-08 00:25:50.391542 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-08 00:25:50.391549 | orchestrator | Wednesday 08 April 2026 00:25:50 +0000 (0:00:00.307) 0:01:07.057 *******
2026-04-08 00:25:50.391557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:25:50.391567 | orchestrator |
2026-04-08 00:25:50.391585 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-08 00:28:13.422864 | orchestrator | Wednesday 08 April 2026 00:25:50 +0000 (0:00:00.307) 0:01:07.364 *******
2026-04-08 00:28:13.422948 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:13.422959 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:13.422966 | orchestrator |
ok: [testbed-node-5] 2026-04-08 00:28:13.422972 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:13.422978 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:13.422983 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:13.422989 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:13.422994 | orchestrator | 2026-04-08 00:28:13.423001 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-08 00:28:13.423021 | orchestrator | Wednesday 08 April 2026 00:25:52 +0000 (0:00:01.914) 0:01:09.279 ******* 2026-04-08 00:28:13.423027 | orchestrator | changed: [testbed-manager] 2026-04-08 00:28:13.423033 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:28:13.423039 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:28:13.423044 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:28:13.423050 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:28:13.423055 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:28:13.423060 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:28:13.423065 | orchestrator | 2026-04-08 00:28:13.423071 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-08 00:28:13.423077 | orchestrator | Wednesday 08 April 2026 00:25:53 +0000 (0:00:00.708) 0:01:09.987 ******* 2026-04-08 00:28:13.423083 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:13.423088 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:13.423094 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:13.423099 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:13.423105 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:13.423110 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:13.423115 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:13.423121 | orchestrator | 2026-04-08 00:28:13.423126 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-08 
00:28:13.423132 | orchestrator | Wednesday 08 April 2026 00:25:53 +0000 (0:00:00.338) 0:01:10.326 ******* 2026-04-08 00:28:13.423137 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:13.423143 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:13.423151 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:13.423159 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:13.423168 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:13.423176 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:13.423184 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:13.423192 | orchestrator | 2026-04-08 00:28:13.423201 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-08 00:28:13.423209 | orchestrator | Wednesday 08 April 2026 00:25:54 +0000 (0:00:01.331) 0:01:11.657 ******* 2026-04-08 00:28:13.423217 | orchestrator | changed: [testbed-manager] 2026-04-08 00:28:13.423226 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:28:13.423234 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:28:13.423242 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:28:13.423250 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:28:13.423258 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:28:13.423266 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:28:13.423275 | orchestrator | 2026-04-08 00:28:13.423283 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-08 00:28:13.423291 | orchestrator | Wednesday 08 April 2026 00:25:56 +0000 (0:00:02.060) 0:01:13.717 ******* 2026-04-08 00:28:13.423300 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:13.423308 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:13.423318 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:13.423326 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:13.423334 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:13.423342 | orchestrator | ok: 
[testbed-node-4] 2026-04-08 00:28:13.423350 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:13.423359 | orchestrator | 2026-04-08 00:28:13.423368 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-08 00:28:13.423375 | orchestrator | Wednesday 08 April 2026 00:25:59 +0000 (0:00:02.940) 0:01:16.658 ******* 2026-04-08 00:28:13.423381 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:13.423391 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:13.423399 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:13.423412 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:13.423418 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:13.423423 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:13.423430 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:13.423436 | orchestrator | 2026-04-08 00:28:13.423442 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-08 00:28:13.423465 | orchestrator | Wednesday 08 April 2026 00:26:38 +0000 (0:00:39.238) 0:01:55.897 ******* 2026-04-08 00:28:13.423472 | orchestrator | changed: [testbed-manager] 2026-04-08 00:28:13.423478 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:28:13.423485 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:28:13.423491 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:28:13.423497 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:28:13.423504 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:28:13.423510 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:28:13.423520 | orchestrator | 2026-04-08 00:28:13.423527 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-08 00:28:13.423533 | orchestrator | Wednesday 08 April 2026 00:27:59 +0000 (0:01:20.470) 0:03:16.367 ******* 2026-04-08 00:28:13.423539 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:13.423546 | orchestrator | 
ok: [testbed-node-5] 2026-04-08 00:28:13.423552 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:13.423559 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:13.423565 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:13.423571 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:13.423578 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:13.423584 | orchestrator | 2026-04-08 00:28:13.423590 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-08 00:28:13.423598 | orchestrator | Wednesday 08 April 2026 00:28:01 +0000 (0:00:01.981) 0:03:18.348 ******* 2026-04-08 00:28:13.423604 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:13.423610 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:13.423617 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:13.423623 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:13.423629 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:13.423635 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:13.423641 | orchestrator | changed: [testbed-manager] 2026-04-08 00:28:13.423647 | orchestrator | 2026-04-08 00:28:13.423654 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-08 00:28:13.423660 | orchestrator | Wednesday 08 April 2026 00:28:12 +0000 (0:00:10.918) 0:03:29.266 ******* 2026-04-08 00:28:13.423690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-08 00:28:13.423721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-08 00:28:13.423730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-08 00:28:13.423738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-08 00:28:13.423749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-08 00:28:13.423755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-08 00:28:13.423765 | orchestrator | 2026-04-08 00:28:13.423772 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-08 00:28:13.423779 | orchestrator | Wednesday 08 April 2026 00:28:12 +0000 (0:00:00.426) 0:03:29.693 ******* 2026-04-08 00:28:13.423785 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:28:13.423792 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:13.423799 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:28:13.423806 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:28:13.423814 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:13.423821 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:13.423828 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:28:13.423834 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:13.423840 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:28:13.423846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:28:13.423852 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:28:13.423858 | orchestrator | 2026-04-08 00:28:13.423863 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-08 00:28:13.423869 | orchestrator | Wednesday 08 April 2026 00:28:13 +0000 (0:00:00.622) 0:03:30.316 ******* 2026-04-08 00:28:13.423880 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 00:28:13.423888 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:28:13.423894 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:28:13.423899 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:28:13.423905 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 00:28:13.423916 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:28:20.570211 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:28:20.570353 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:28:20.570382 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:28:20.570403 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:28:20.570423 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:20.570445 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 00:28:20.570464 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:28:20.570483 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:28:20.570536 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:28:20.570555 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 00:28:20.570567 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 
00:28:20.570578 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:28:20.570588 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:28:20.570598 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:28:20.570611 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:28:20.570625 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:28:20.570638 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 00:28:20.570651 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:28:20.570663 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:28:20.570676 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:28:20.570728 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:28:20.570748 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:28:20.570761 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 00:28:20.570773 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:28:20.570786 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:28:20.570798 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:20.570811 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:28:20.570823 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:28:20.570835 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:28:20.570862 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:28:20.570875 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:20.570888 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 00:28:20.570900 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:28:20.570912 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:28:20.570925 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:28:20.570937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:28:20.570949 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:28:20.570962 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:20.570974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-08 00:28:20.570985 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-08 00:28:20.570995 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-08 00:28:20.571015 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-08 00:28:20.571026 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-08 00:28:20.571058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-08 00:28:20.571069 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-08 00:28:20.571080 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-08 00:28:20.571091 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-08 00:28:20.571101 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-08 00:28:20.571116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-08 00:28:20.571135 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-08 00:28:20.571153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-08 00:28:20.571165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-08 00:28:20.571177 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-08 00:28:20.571187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-08 00:28:20.571198 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-08 00:28:20.571209 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-08 00:28:20.571219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-08 00:28:20.571230 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 
2026-04-08 00:28:20.571241 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-08 00:28:20.571251 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-08 00:28:20.571262 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-08 00:28:20.571273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-08 00:28:20.571283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-08 00:28:20.571294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-08 00:28:20.571305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-08 00:28:20.571315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-08 00:28:20.571326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-08 00:28:20.571337 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-08 00:28:20.571348 | orchestrator | 2026-04-08 00:28:20.571359 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-08 00:28:20.571371 | orchestrator | Wednesday 08 April 2026 00:28:18 +0000 (0:00:05.052) 0:03:35.368 ******* 2026-04-08 00:28:20.571381 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-08 00:28:20.571392 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-08 00:28:20.571403 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-08 00:28:20.571419 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-08 00:28:20.571438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-08 00:28:20.571449 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-08 00:28:20.571460 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-08 00:28:20.571471 | orchestrator | 2026-04-08 00:28:20.571481 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-08 00:28:20.571492 | orchestrator | Wednesday 08 April 2026 00:28:19 +0000 (0:00:01.558) 0:03:36.927 ******* 2026-04-08 00:28:20.571503 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:20.571514 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:20.571524 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:20.571535 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:20.571546 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:20.571557 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:20.571567 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:20.571578 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:20.571589 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-08 00:28:20.571600 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-08 00:28:20.571618 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-08 
00:28:33.977736 | orchestrator | 2026-04-08 00:28:33.977896 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-08 00:28:33.977931 | orchestrator | Wednesday 08 April 2026 00:28:20 +0000 (0:00:00.644) 0:03:37.571 ******* 2026-04-08 00:28:33.977953 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:33.977975 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:33.977997 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:33.978076 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:33.978099 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:33.978121 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:33.978142 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-08 00:28:33.978162 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:33.978183 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-08 00:28:33.978204 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-08 00:28:33.978224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-08 00:28:33.978246 | orchestrator | 2026-04-08 00:28:33.978268 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-08 00:28:33.978289 | orchestrator | Wednesday 08 April 2026 00:28:21 +0000 (0:00:00.545) 0:03:38.117 ******* 2026-04-08 00:28:33.978309 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-08 
00:28:33.978330 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:33.978351 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-08 00:28:33.978371 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:33.978393 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-08 00:28:33.978447 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-08 00:28:33.978469 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:33.978490 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:33.978511 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-08 00:28:33.978531 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-08 00:28:33.978553 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-08 00:28:33.978574 | orchestrator | 2026-04-08 00:28:33.978595 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-08 00:28:33.978615 | orchestrator | Wednesday 08 April 2026 00:28:22 +0000 (0:00:01.645) 0:03:39.762 ******* 2026-04-08 00:28:33.978635 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:33.978654 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:33.978674 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:33.978727 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:33.978747 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:33.978767 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:33.978787 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:33.978806 | orchestrator | 2026-04-08 00:28:33.978825 | orchestrator | TASK 
[osism.commons.services : Populate service facts] *************************
2026-04-08 00:28:33.978845 | orchestrator | Wednesday 08 April 2026 00:28:23 +0000 (0:00:00.251) 0:03:40.014 *******
2026-04-08 00:28:33.978865 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:33.978884 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:33.978903 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:33.978922 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:33.978941 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:33.978960 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:33.978980 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:33.978998 | orchestrator |
2026-04-08 00:28:33.979017 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-08 00:28:33.979036 | orchestrator | Wednesday 08 April 2026 00:28:28 +0000 (0:00:05.358) 0:03:45.372 *******
2026-04-08 00:28:33.979056 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-08 00:28:33.979076 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-08 00:28:33.979095 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:28:33.979147 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-08 00:28:33.979167 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:28:33.979187 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-08 00:28:33.979206 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:28:33.979225 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-08 00:28:33.979245 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:28:33.979264 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-08 00:28:33.979283 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:28:33.979302 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:28:33.979320 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-08 00:28:33.979339 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:28:33.979358 | orchestrator |
2026-04-08 00:28:33.979378 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-08 00:28:33.979397 | orchestrator | Wednesday 08 April 2026 00:28:28 +0000 (0:00:00.301) 0:03:45.673 *******
2026-04-08 00:28:33.979417 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-08 00:28:33.979436 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-08 00:28:33.979456 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-08 00:28:33.979503 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-08 00:28:33.979523 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-08 00:28:33.979542 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-08 00:28:33.979580 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-08 00:28:33.979600 | orchestrator |
2026-04-08 00:28:33.979618 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-08 00:28:33.979638 | orchestrator | Wednesday 08 April 2026 00:28:29 +0000 (0:00:01.120) 0:03:46.794 *******
2026-04-08 00:28:33.979660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:28:33.979766 | orchestrator |
2026-04-08 00:28:33.979790 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-08 00:28:33.979810 | orchestrator | Wednesday 08 April 2026 00:28:30 +0000 (0:00:00.389) 0:03:47.183 *******
2026-04-08 00:28:33.979821 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:33.979832 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:33.979843 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:33.979854 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:33.979865 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:33.979876 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:33.979886 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:33.979896 | orchestrator |
2026-04-08 00:28:33.979906 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-08 00:28:33.979916 | orchestrator | Wednesday 08 April 2026 00:28:31 +0000 (0:00:01.311) 0:03:48.494 *******
2026-04-08 00:28:33.979925 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:33.979935 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:33.979944 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:33.979954 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:33.979963 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:33.979972 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:33.979982 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:33.979991 | orchestrator |
2026-04-08 00:28:33.980001 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-08 00:28:33.980010 | orchestrator | Wednesday 08 April 2026 00:28:32 +0000 (0:00:00.637) 0:03:49.132 *******
2026-04-08 00:28:33.980020 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:33.980030 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:33.980039 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:33.980049 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:33.980058 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:33.980068 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:33.980077 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:33.980087 | orchestrator |
2026-04-08 00:28:33.980096 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-08 00:28:33.980106 | orchestrator | Wednesday 08 April 2026 00:28:32 +0000 (0:00:00.632) 0:03:49.765 *******
2026-04-08 00:28:33.980116 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:33.980145 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:33.980155 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:33.980164 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:33.980174 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:33.980184 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:33.980193 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:33.980203 | orchestrator |
2026-04-08 00:28:33.980212 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-08 00:28:33.980222 | orchestrator | Wednesday 08 April 2026 00:28:33 +0000 (0:00:00.620) 0:03:50.385 *******
2026-04-08 00:28:33.980241 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606706.92148, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:33.980264 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606733.777275, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:33.980275 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606740.1502635, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:33.980311 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606721.510235, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558088 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606730.0767355, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558232 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606723.62043, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558263 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606730.4342215, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558308 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558353 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558366 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558378 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558422 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558435 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558446 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:28:39.558458 | orchestrator |
2026-04-08 00:28:39.558471 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-08 00:28:39.558485 | orchestrator | Wednesday 08 April 2026 00:28:34 +0000 (0:00:00.965) 0:03:51.351 *******
2026-04-08 00:28:39.558496 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:39.558508 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:39.558519 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:39.558538 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:39.558551 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:39.558564 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:39.558577 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:39.558590 | orchestrator |
2026-04-08 00:28:39.558602 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-08 00:28:39.558618 | orchestrator | Wednesday 08 April 2026 00:28:35 +0000 (0:00:01.203) 0:03:52.555 *******
2026-04-08 00:28:39.558638 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:39.558656 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:39.558703 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:39.558732 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:39.558752 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:39.558771 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:39.558791 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:39.558812 | orchestrator |
2026-04-08 00:28:39.558831 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-08 00:28:39.558852 | orchestrator | Wednesday 08 April 2026 00:28:36 +0000 (0:00:01.272) 0:03:53.827 *******
2026-04-08 00:28:39.558865 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:39.558878 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:39.558890 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:39.558902 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:39.558914 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:39.558926 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:39.558937 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:39.558948 | orchestrator |
2026-04-08 00:28:39.558958 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-08 00:28:39.558969 | orchestrator | Wednesday 08 April 2026 00:28:38 +0000 (0:00:00.285) 0:03:55.090 *******
2026-04-08 00:28:39.558981 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:28:39.558992 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:28:39.559002 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:28:39.559013 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:28:39.559023 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:28:39.559034 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:28:39.559044 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:28:39.559055 | orchestrator |
2026-04-08 00:28:39.559066 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-08 00:28:39.559077 | orchestrator | Wednesday 08 April 2026 00:28:38 +0000 (0:00:00.285) 0:03:55.375 *******
2026-04-08 00:28:39.559087 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:39.559099 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:39.559110 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:39.559121 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:39.559131 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:39.559142 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:39.559152 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:39.559163 | orchestrator |
2026-04-08 00:28:39.559174 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-08 00:28:39.559185 | orchestrator | Wednesday 08 April 2026 00:28:39 +0000 (0:00:00.706) 0:03:56.082 *******
2026-04-08 00:28:39.559199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:28:39.559212 | orchestrator |
2026-04-08 00:28:39.559223 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-08 00:28:39.559244 | orchestrator | Wednesday 08 April 2026 00:28:39 +0000 (0:00:00.453) 0:03:56.535 *******
2026-04-08 00:29:58.477831 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.477986 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:58.478003 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:58.478180 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:58.478240 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:58.478261 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:58.478281 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:58.478301 | orchestrator |
2026-04-08 00:29:58.478322 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-08 00:29:58.478344 | orchestrator | Wednesday 08 April 2026 00:28:48 +0000 (0:00:09.236) 0:04:05.772 *******
2026-04-08 00:29:58.478363 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.478383 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:58.478402 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:58.478421 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:58.478440 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:58.478458 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:58.478477 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:58.478496 | orchestrator |
2026-04-08 00:29:58.478516 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-08 00:29:58.478538 | orchestrator | Wednesday 08 April 2026 00:28:50 +0000 (0:00:01.674) 0:04:07.447 *******
2026-04-08 00:29:58.478557 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.478574 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:58.478593 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:58.478644 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:58.478666 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:58.478684 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:58.478703 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:58.478720 | orchestrator |
2026-04-08 00:29:58.478738 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-08 00:29:58.478756 | orchestrator | Wednesday 08 April 2026 00:28:51 +0000 (0:00:01.086) 0:04:08.534 *******
2026-04-08 00:29:58.478775 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.478794 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:58.478813 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:58.478832 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:58.478851 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:58.478869 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:58.478888 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:58.478906 | orchestrator |
2026-04-08 00:29:58.478926 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-08 00:29:58.478947 | orchestrator | Wednesday 08 April 2026 00:28:51 +0000 (0:00:00.304) 0:04:08.838 *******
2026-04-08 00:29:58.478965 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.478983 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:58.479003 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:58.479020 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:58.479039 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:58.479051 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:58.479061 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:58.479072 | orchestrator |
2026-04-08 00:29:58.479083 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-08 00:29:58.479095 | orchestrator | Wednesday 08 April 2026 00:28:52 +0000 (0:00:00.326) 0:04:09.165 *******
2026-04-08 00:29:58.479106 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.479116 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:58.479127 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:58.479138 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:58.479149 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:58.479160 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:58.479171 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:58.479182 | orchestrator |
2026-04-08 00:29:58.479193 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-08 00:29:58.479204 | orchestrator | Wednesday 08 April 2026 00:28:52 +0000 (0:00:00.290) 0:04:09.456 *******
2026-04-08 00:29:58.479215 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:58.479226 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:58.479237 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:58.479264 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:58.479274 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:58.479285 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:58.479296 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.479306 | orchestrator |
2026-04-08 00:29:58.479317 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-08 00:29:58.479334 | orchestrator | Wednesday 08 April 2026 00:28:57 +0000 (0:00:04.759) 0:04:14.215 *******
2026-04-08 00:29:58.479355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:29:58.479377 | orchestrator |
2026-04-08 00:29:58.479395 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-08 00:29:58.479415 | orchestrator | Wednesday 08 April 2026 00:28:57 +0000 (0:00:00.419) 0:04:14.634 *******
2026-04-08 00:29:58.479434 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-08 00:29:58.479451 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-08 00:29:58.479472 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-08 00:29:58.479489 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-08 00:29:58.479509 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:58.479528 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:58.479546 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-08 00:29:58.479565 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-08 00:29:58.479583 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-08 00:29:58.479603 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-08 00:29:58.479681 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:58.479700 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-08 00:29:58.479717 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-08 00:29:58.479732 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:58.479744 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-08 00:29:58.479755 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-08 00:29:58.479799 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:58.479817 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:58.479838 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-08 00:29:58.479855 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-08 00:29:58.479873 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:58.479890 | orchestrator |
2026-04-08 00:29:58.479908 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-08 00:29:58.479925 | orchestrator | Wednesday 08 April 2026 00:28:58 +0000 (0:00:00.376) 0:04:15.011 *******
2026-04-08 00:29:58.479943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:29:58.479962 | orchestrator |
2026-04-08 00:29:58.479979 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-08 00:29:58.479997 | orchestrator | Wednesday 08 April 2026 00:28:58 +0000 (0:00:00.580) 0:04:15.592 *******
2026-04-08 00:29:58.480014 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-08 00:29:58.480031 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-08 00:29:58.480050 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:58.480070 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-08 00:29:58.480088 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:58.480106 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-08 00:29:58.480125 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:58.480157 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-08 00:29:58.480200 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:58.480237 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-08 00:29:58.480257 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:58.480275 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:58.480294 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-08 00:29:58.480313 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:58.480332 | orchestrator |
2026-04-08 00:29:58.480351 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-08 00:29:58.480370 | orchestrator | Wednesday 08 April 2026 00:28:58 +0000 (0:00:00.357) 0:04:15.949 *******
2026-04-08 00:29:58.480420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:29:58.480441 | orchestrator |
2026-04-08 00:29:58.480459 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-08 00:29:58.480478 | orchestrator | Wednesday 08 April 2026 00:28:59 +0000 (0:00:00.391) 0:04:16.341 *******
2026-04-08 00:29:58.480495 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:58.480509 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:58.480528 | orchestrator | changed: [testbed-manager]
2026-04-08 00:29:58.480546 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:58.480565 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:58.480584 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:58.480603 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:58.480682 | orchestrator |
2026-04-08 00:29:58.480701 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-08 00:29:58.480722 | orchestrator | Wednesday 08 April 2026 00:29:32 +0000 (0:00:33.560) 0:04:49.901 *******
2026-04-08 00:29:58.480741 | orchestrator | changed: [testbed-manager]
2026-04-08 00:29:58.480757 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:58.480768 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:58.480779 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:58.480789 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:58.480800 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:58.480811 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:58.480822 | orchestrator |
2026-04-08 00:29:58.480832 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-08 00:29:58.480843 | orchestrator | Wednesday 08 April 2026 00:29:41 +0000 (0:00:08.350) 0:04:58.252 *******
2026-04-08 00:29:58.480854 | orchestrator | changed: [testbed-manager]
2026-04-08 00:29:58.480864 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:58.480875 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:58.480888 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:58.480907 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:58.480925 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:58.480942 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:58.480960 | orchestrator |
2026-04-08 00:29:58.480978 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-08 00:29:58.480997 | orchestrator | Wednesday 08 April 2026 00:29:49 +0000 (0:00:08.512) 0:05:06.765 *******
2026-04-08 00:29:58.481017 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:58.481035 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:58.481054 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:58.481073 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:58.481091 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:58.481109 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:58.481128 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:58.481148 | orchestrator |
2026-04-08 00:29:58.481166 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-08 00:29:58.481201 | orchestrator | Wednesday 08 April 2026 00:29:51 +0000 (0:00:01.889) 0:05:08.655 *******
2026-04-08 00:29:58.481222 | orchestrator | changed: [testbed-manager]
2026-04-08 00:29:58.481240 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:58.481259 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:58.481275 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:58.481292 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:58.481310 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:58.481327 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:58.481346 | orchestrator |
2026-04-08 00:29:58.481385 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-08 00:30:10.073009 | orchestrator | Wednesday 08 April 2026 00:29:58 +0000 (0:00:06.795) 0:05:15.451 *******
2026-04-08 00:30:10.073105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:30:10.073118 | orchestrator |
2026-04-08 00:30:10.073128 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-08 00:30:10.073134 | orchestrator | Wednesday 08 April 2026 00:29:58 +0000 (0:00:00.417) 0:05:15.869 *******
2026-04-08 00:30:10.073139 | orchestrator | changed: [testbed-manager]
2026-04-08 00:30:10.073145 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:10.073150 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:10.073154 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:10.073203 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:10.073208 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:10.073213 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:10.073218 | orchestrator |
2026-04-08 00:30:10.073222 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-08 00:30:10.073227 | orchestrator | Wednesday 08 April 2026 00:29:59 +0000 (0:00:00.745) 0:05:16.614 *******
2026-04-08 00:30:10.073232 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:10.073237 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:10.073241 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:10.073246 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:10.073250 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:10.073255 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:10.073259 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:10.073266 | orchestrator |
2026-04-08 00:30:10.073273 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-08 00:30:10.073280 | orchestrator | Wednesday 08 April 2026 00:30:01 +0000 (0:00:01.816) 0:05:18.430 *******
2026-04-08 00:30:10.073287 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:10.073294 | orchestrator | changed: [testbed-manager]
2026-04-08 00:30:10.073304 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:10.073311 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:10.073318 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:10.073325 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:10.073332 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:10.073338 | orchestrator |
2026-04-08 00:30:10.073345 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-08 00:30:10.073352 | orchestrator | Wednesday 08 April 2026 00:30:02 +0000 (0:00:00.812) 0:05:19.243 *******
2026-04-08 00:30:10.073359 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:10.073365 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:10.073372 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:10.073379 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:10.073385 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:10.073393 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:10.073409 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:10.073416 | orchestrator |
2026-04-08 00:30:10.073430 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-08 00:30:10.073454 | orchestrator | Wednesday 08 April 2026 00:30:02 +0000 (0:00:00.265) 0:05:19.508 *******
2026-04-08 00:30:10.073484 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:10.073491 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:10.073498 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:10.073504 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:10.073511 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:10.073519 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:10.073525 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:10.073532 | orchestrator |
2026-04-08 00:30:10.073539 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-08 00:30:10.073547 | orchestrator | Wednesday 08 April 2026 00:30:02 +0000 (0:00:00.383) 0:05:19.892 *******
2026-04-08 00:30:10.073554 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:10.073563 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:10.073570 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:10.073577 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:10.073585 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:10.073592 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:10.073600 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:10.073658 | orchestrator |
2026-04-08 00:30:10.073665 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-08 00:30:10.073672 | orchestrator | Wednesday 08 April 2026 00:30:03 +0000 (0:00:00.434) 0:05:20.326 *******
2026-04-08 00:30:10.073678 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:10.073684 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:10.073690 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:10.073697 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:10.073704 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:10.073712 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:10.073718
| orchestrator | skipping: [testbed-node-5] 2026-04-08 00:30:10.073724 | orchestrator | 2026-04-08 00:30:10.073731 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-08 00:30:10.073739 | orchestrator | Wednesday 08 April 2026 00:30:03 +0000 (0:00:00.268) 0:05:20.595 ******* 2026-04-08 00:30:10.073746 | orchestrator | ok: [testbed-manager] 2026-04-08 00:30:10.073753 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:30:10.073759 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:30:10.073766 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:30:10.073773 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:30:10.073780 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:30:10.073787 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:30:10.073794 | orchestrator | 2026-04-08 00:30:10.073801 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-08 00:30:10.073808 | orchestrator | Wednesday 08 April 2026 00:30:03 +0000 (0:00:00.311) 0:05:20.907 ******* 2026-04-08 00:30:10.073815 | orchestrator | ok: [testbed-manager] =>  2026-04-08 00:30:10.073823 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:30:10.073829 | orchestrator | ok: [testbed-node-0] =>  2026-04-08 00:30:10.073837 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:30:10.073843 | orchestrator | ok: [testbed-node-1] =>  2026-04-08 00:30:10.073850 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:30:10.073858 | orchestrator | ok: [testbed-node-2] =>  2026-04-08 00:30:10.073865 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:30:10.073893 | orchestrator | ok: [testbed-node-3] =>  2026-04-08 00:30:10.073900 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:30:10.073907 | orchestrator | ok: [testbed-node-4] =>  2026-04-08 00:30:10.073915 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:30:10.073921 | orchestrator | ok: [testbed-node-5] =>  2026-04-08 
00:30:10.073929 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:30:10.073936 | orchestrator | 2026-04-08 00:30:10.073943 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-08 00:30:10.073950 | orchestrator | Wednesday 08 April 2026 00:30:04 +0000 (0:00:00.253) 0:05:21.161 ******* 2026-04-08 00:30:10.073957 | orchestrator | ok: [testbed-manager] =>  2026-04-08 00:30:10.073974 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:30:10.073981 | orchestrator | ok: [testbed-node-0] =>  2026-04-08 00:30:10.073988 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:30:10.073995 | orchestrator | ok: [testbed-node-1] =>  2026-04-08 00:30:10.074002 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:30:10.074009 | orchestrator | ok: [testbed-node-2] =>  2026-04-08 00:30:10.074074 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:30:10.074083 | orchestrator | ok: [testbed-node-3] =>  2026-04-08 00:30:10.074090 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:30:10.074097 | orchestrator | ok: [testbed-node-4] =>  2026-04-08 00:30:10.074103 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:30:10.074110 | orchestrator | ok: [testbed-node-5] =>  2026-04-08 00:30:10.074117 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:30:10.074124 | orchestrator | 2026-04-08 00:30:10.074132 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-08 00:30:10.074139 | orchestrator | Wednesday 08 April 2026 00:30:04 +0000 (0:00:00.275) 0:05:21.437 ******* 2026-04-08 00:30:10.074146 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:30:10.074153 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:30:10.074159 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:30:10.074166 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:30:10.074172 | orchestrator | skipping: [testbed-node-3] 2026-04-08 
00:30:10.074180 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:30:10.074187 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:30:10.074194 | orchestrator | 2026-04-08 00:30:10.074201 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-08 00:30:10.074209 | orchestrator | Wednesday 08 April 2026 00:30:04 +0000 (0:00:00.263) 0:05:21.700 ******* 2026-04-08 00:30:10.074216 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:30:10.074223 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:30:10.074230 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:30:10.074237 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:30:10.074246 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:30:10.074253 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:30:10.074260 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:30:10.074267 | orchestrator | 2026-04-08 00:30:10.074274 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-08 00:30:10.074281 | orchestrator | Wednesday 08 April 2026 00:30:04 +0000 (0:00:00.280) 0:05:21.980 ******* 2026-04-08 00:30:10.074299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:30:10.074309 | orchestrator | 2026-04-08 00:30:10.074316 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-08 00:30:10.074324 | orchestrator | Wednesday 08 April 2026 00:30:05 +0000 (0:00:00.417) 0:05:22.397 ******* 2026-04-08 00:30:10.074332 | orchestrator | ok: [testbed-manager] 2026-04-08 00:30:10.074339 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:30:10.074346 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:30:10.074354 | 
orchestrator | ok: [testbed-node-4] 2026-04-08 00:30:10.074360 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:30:10.074368 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:30:10.074375 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:30:10.074381 | orchestrator | 2026-04-08 00:30:10.074388 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-08 00:30:10.074396 | orchestrator | Wednesday 08 April 2026 00:30:06 +0000 (0:00:00.989) 0:05:23.387 ******* 2026-04-08 00:30:10.074403 | orchestrator | ok: [testbed-manager] 2026-04-08 00:30:10.074410 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:30:10.074418 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:30:10.074425 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:30:10.074431 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:30:10.074446 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:30:10.074453 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:30:10.074460 | orchestrator | 2026-04-08 00:30:10.074467 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-08 00:30:10.074474 | orchestrator | Wednesday 08 April 2026 00:30:09 +0000 (0:00:03.278) 0:05:26.665 ******* 2026-04-08 00:30:10.074481 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-08 00:30:10.074488 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-08 00:30:10.074496 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-08 00:30:10.074503 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-08 00:30:10.074510 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-08 00:30:10.074518 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-08 00:30:10.074524 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:30:10.074531 | orchestrator | skipping: [testbed-node-1] => 
(item=containerd)  2026-04-08 00:30:10.074538 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-08 00:30:10.074545 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-08 00:30:10.074552 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:30:10.074558 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-08 00:30:10.074565 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-08 00:30:10.074571 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-08 00:30:10.074579 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:30:10.074586 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-08 00:30:10.074646 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-08 00:31:13.678480 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-08 00:31:13.678628 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:31:13.678651 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-08 00:31:13.678663 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:13.678674 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-08 00:31:13.678685 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-08 00:31:13.678696 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:13.678707 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-08 00:31:13.678718 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-08 00:31:13.678729 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-08 00:31:13.678740 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:13.678752 | orchestrator | 2026-04-08 00:31:13.678764 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-08 00:31:13.678776 | orchestrator | Wednesday 08 April 2026 
00:30:10 +0000 (0:00:00.612) 0:05:27.278 ******* 2026-04-08 00:31:13.678787 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.678798 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.678809 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.678820 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.678831 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.678841 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.678852 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.678863 | orchestrator | 2026-04-08 00:31:13.678874 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-08 00:31:13.678885 | orchestrator | Wednesday 08 April 2026 00:30:17 +0000 (0:00:07.620) 0:05:34.898 ******* 2026-04-08 00:31:13.678895 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.678906 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.678917 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.678928 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.678938 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.678949 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.678983 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.678995 | orchestrator | 2026-04-08 00:31:13.679006 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-08 00:31:13.679020 | orchestrator | Wednesday 08 April 2026 00:30:19 +0000 (0:00:01.092) 0:05:35.990 ******* 2026-04-08 00:31:13.679033 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.679046 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.679058 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.679070 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.679083 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.679096 | orchestrator | changed: 
[testbed-node-5] 2026-04-08 00:31:13.679108 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.679121 | orchestrator | 2026-04-08 00:31:13.679134 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-08 00:31:13.679146 | orchestrator | Wednesday 08 April 2026 00:30:27 +0000 (0:00:08.992) 0:05:44.983 ******* 2026-04-08 00:31:13.679159 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:13.679171 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.679199 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.679213 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.679225 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.679237 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.679250 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.679263 | orchestrator | 2026-04-08 00:31:13.679276 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-08 00:31:13.679288 | orchestrator | Wednesday 08 April 2026 00:30:31 +0000 (0:00:03.485) 0:05:48.468 ******* 2026-04-08 00:31:13.679300 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.679313 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.679325 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.679338 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.679350 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.679363 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.679374 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.679385 | orchestrator | 2026-04-08 00:31:13.679396 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-08 00:31:13.679407 | orchestrator | Wednesday 08 April 2026 00:30:32 +0000 (0:00:01.342) 0:05:49.811 ******* 2026-04-08 00:31:13.679417 | orchestrator | ok: [testbed-manager] 2026-04-08 
00:31:13.679428 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.679439 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.679449 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.679460 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.679470 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.679481 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.679507 | orchestrator | 2026-04-08 00:31:13.679527 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-08 00:31:13.679538 | orchestrator | Wednesday 08 April 2026 00:30:34 +0000 (0:00:01.358) 0:05:51.170 ******* 2026-04-08 00:31:13.679567 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:13.679578 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:13.679589 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:31:13.679600 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:13.679611 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:13.679621 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:13.679632 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:13.679643 | orchestrator | 2026-04-08 00:31:13.679654 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-08 00:31:13.679665 | orchestrator | Wednesday 08 April 2026 00:30:34 +0000 (0:00:00.584) 0:05:51.755 ******* 2026-04-08 00:31:13.679676 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.679686 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.679697 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.679716 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.679727 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.679737 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.679748 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.679759 | 
orchestrator | 2026-04-08 00:31:13.679770 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-08 00:31:13.679798 | orchestrator | Wednesday 08 April 2026 00:30:44 +0000 (0:00:10.219) 0:06:01.975 ******* 2026-04-08 00:31:13.679810 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:13.679820 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.679831 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.679841 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.679852 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.679863 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.679873 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.679884 | orchestrator | 2026-04-08 00:31:13.679895 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-08 00:31:13.679906 | orchestrator | Wednesday 08 April 2026 00:30:46 +0000 (0:00:01.136) 0:06:03.111 ******* 2026-04-08 00:31:13.679917 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.679927 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.679938 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.679948 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.679959 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.679970 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.679980 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.679991 | orchestrator | 2026-04-08 00:31:13.680002 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-08 00:31:13.680013 | orchestrator | Wednesday 08 April 2026 00:30:56 +0000 (0:00:09.938) 0:06:13.049 ******* 2026-04-08 00:31:13.680023 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.680034 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.680045 | orchestrator | changed: 
[testbed-node-2] 2026-04-08 00:31:13.680056 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.680066 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.680077 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.680087 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.680098 | orchestrator | 2026-04-08 00:31:13.680109 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-08 00:31:13.680120 | orchestrator | Wednesday 08 April 2026 00:31:07 +0000 (0:00:11.118) 0:06:24.168 ******* 2026-04-08 00:31:13.680131 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-08 00:31:13.680141 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-08 00:31:13.680152 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-08 00:31:13.680163 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-08 00:31:13.680174 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-08 00:31:13.680185 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-08 00:31:13.680196 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-08 00:31:13.680206 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-08 00:31:13.680217 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-08 00:31:13.680228 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-08 00:31:13.680238 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-08 00:31:13.680249 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-08 00:31:13.680260 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-08 00:31:13.680270 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-08 00:31:13.680281 | orchestrator | 2026-04-08 00:31:13.680292 | orchestrator | TASK [osism.services.docker : Install python3 docker package] 
****************** 2026-04-08 00:31:13.680303 | orchestrator | Wednesday 08 April 2026 00:31:08 +0000 (0:00:01.168) 0:06:25.336 ******* 2026-04-08 00:31:13.680320 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:31:13.680331 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:13.680342 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:13.680353 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:31:13.680363 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:13.680374 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:13.680385 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:13.680395 | orchestrator | 2026-04-08 00:31:13.680406 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-08 00:31:13.680417 | orchestrator | Wednesday 08 April 2026 00:31:08 +0000 (0:00:00.640) 0:06:25.976 ******* 2026-04-08 00:31:13.680428 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:13.680439 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:13.680449 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:13.680460 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:13.680470 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:13.680481 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:13.680492 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:13.680502 | orchestrator | 2026-04-08 00:31:13.680513 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-08 00:31:13.680525 | orchestrator | Wednesday 08 April 2026 00:31:12 +0000 (0:00:03.947) 0:06:29.924 ******* 2026-04-08 00:31:13.680536 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:31:13.680566 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:13.680577 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:13.680588 | orchestrator | skipping: [testbed-node-2] 
2026-04-08 00:31:13.680599 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:13.680609 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:13.680620 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:13.680631 | orchestrator | 2026-04-08 00:31:13.680643 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-08 00:31:13.680654 | orchestrator | Wednesday 08 April 2026 00:31:13 +0000 (0:00:00.478) 0:06:30.403 ******* 2026-04-08 00:31:13.680665 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-08 00:31:13.680675 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-08 00:31:13.680686 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:31:13.680697 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-08 00:31:13.680708 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-08 00:31:13.680719 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:13.680730 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-08 00:31:13.680741 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-08 00:31:13.680752 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:13.680770 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-08 00:31:32.505611 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-08 00:31:32.505776 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:31:32.505805 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-08 00:31:32.505821 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-08 00:31:32.505890 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:32.505903 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-08 00:31:32.505914 | orchestrator | 
skipping: [testbed-node-4] => (item=python-docker)  2026-04-08 00:31:32.505926 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:32.505937 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-08 00:31:32.505948 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-08 00:31:32.505959 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:32.505971 | orchestrator | 2026-04-08 00:31:32.505984 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-08 00:31:32.506106 | orchestrator | Wednesday 08 April 2026 00:31:13 +0000 (0:00:00.517) 0:06:30.920 ******* 2026-04-08 00:31:32.506122 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:31:32.506134 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:32.506147 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:32.506160 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:31:32.506174 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:32.506186 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:32.506199 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:32.506211 | orchestrator | 2026-04-08 00:31:32.506224 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-08 00:31:32.506237 | orchestrator | Wednesday 08 April 2026 00:31:14 +0000 (0:00:00.473) 0:06:31.394 ******* 2026-04-08 00:31:32.506250 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:31:32.506263 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:32.506276 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:32.506288 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:31:32.506300 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:32.506313 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:32.506326 | orchestrator | skipping: [testbed-node-5] 2026-04-08 
00:31:32.506339 | orchestrator |
2026-04-08 00:31:32.506352 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-08 00:31:32.506364 | orchestrator | Wednesday 08 April 2026 00:31:15 +0000 (0:00:00.601) 0:06:31.996 *******
2026-04-08 00:31:32.506377 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:31:32.506389 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:31:32.506403 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:31:32.506414 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:31:32.506425 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:31:32.506436 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:31:32.506446 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:31:32.506457 | orchestrator |
2026-04-08 00:31:32.506468 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-08 00:31:32.506487 | orchestrator | Wednesday 08 April 2026 00:31:15 +0000 (0:00:00.502) 0:06:32.498 *******
2026-04-08 00:31:32.506499 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.506510 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:31:32.506521 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:31:32.506557 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:31:32.506568 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:31:32.506579 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:31:32.506589 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:31:32.506600 | orchestrator |
2026-04-08 00:31:32.506611 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-08 00:31:32.506622 | orchestrator | Wednesday 08 April 2026 00:31:17 +0000 (0:00:01.709) 0:06:34.208 *******
2026-04-08 00:31:32.506634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:31:32.506648 | orchestrator |
2026-04-08 00:31:32.506660 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-08 00:31:32.506671 | orchestrator | Wednesday 08 April 2026 00:31:18 +0000 (0:00:00.789) 0:06:34.998 *******
2026-04-08 00:31:32.506682 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.506693 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:31:32.506704 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:31:32.506715 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:31:32.506726 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:32.506737 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:32.506748 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:32.506759 | orchestrator |
2026-04-08 00:31:32.506770 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-08 00:31:32.506791 | orchestrator | Wednesday 08 April 2026 00:31:19 +0000 (0:00:01.004) 0:06:36.003 *******
2026-04-08 00:31:32.506802 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.506812 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:31:32.506823 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:31:32.506834 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:31:32.506845 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:32.506856 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:32.506866 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:32.506877 | orchestrator |
2026-04-08 00:31:32.506888 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-08 00:31:32.506899 | orchestrator | Wednesday 08 April 2026 00:31:19 +0000 (0:00:00.811) 0:06:36.814 *******
2026-04-08 00:31:32.506910 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.506921 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:31:32.506932 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:31:32.506943 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:31:32.506953 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:32.506964 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:32.506975 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:32.506986 | orchestrator |
2026-04-08 00:31:32.506997 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-08 00:31:32.507028 | orchestrator | Wednesday 08 April 2026 00:31:21 +0000 (0:00:01.290) 0:06:38.105 *******
2026-04-08 00:31:32.507040 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:31:32.507051 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:31:32.507062 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:31:32.507073 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:31:32.507084 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:31:32.507094 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:31:32.507105 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:31:32.507116 | orchestrator |
2026-04-08 00:31:32.507127 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-08 00:31:32.507138 | orchestrator | Wednesday 08 April 2026 00:31:22 +0000 (0:00:01.344) 0:06:39.450 *******
2026-04-08 00:31:32.507148 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.507159 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:31:32.507170 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:31:32.507181 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:31:32.507192 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:32.507203 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:32.507214 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:32.507225 | orchestrator |
2026-04-08 00:31:32.507235 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-08 00:31:32.507246 | orchestrator | Wednesday 08 April 2026 00:31:23 +0000 (0:00:01.330) 0:06:40.780 *******
2026-04-08 00:31:32.507257 | orchestrator | changed: [testbed-manager]
2026-04-08 00:31:32.507268 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:31:32.507279 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:31:32.507290 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:31:32.507300 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:32.507311 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:32.507322 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:32.507333 | orchestrator |
2026-04-08 00:31:32.507344 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-08 00:31:32.507355 | orchestrator | Wednesday 08 April 2026 00:31:25 +0000 (0:00:01.585) 0:06:42.366 *******
2026-04-08 00:31:32.507366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:31:32.507377 | orchestrator |
2026-04-08 00:31:32.507388 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-08 00:31:32.507400 | orchestrator | Wednesday 08 April 2026 00:31:26 +0000 (0:00:00.836) 0:06:43.202 *******
2026-04-08 00:31:32.507426 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.507438 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:31:32.507448 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:31:32.507459 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:31:32.507470 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:31:32.507481 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:31:32.507492 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:31:32.507503 | orchestrator |
2026-04-08 00:31:32.507514 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-08 00:31:32.507525 | orchestrator | Wednesday 08 April 2026 00:31:27 +0000 (0:00:01.377) 0:06:44.580 *******
2026-04-08 00:31:32.507559 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.507570 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:31:32.507581 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:31:32.507592 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:31:32.507603 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:31:32.507613 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:31:32.507624 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:31:32.507635 | orchestrator |
2026-04-08 00:31:32.507646 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-08 00:31:32.507657 | orchestrator | Wednesday 08 April 2026 00:31:28 +0000 (0:00:01.319) 0:06:45.899 *******
2026-04-08 00:31:32.507668 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.507679 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:31:32.507690 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:31:32.507700 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:31:32.507711 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:31:32.507722 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:31:32.507733 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:31:32.507744 | orchestrator |
2026-04-08 00:31:32.507755 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-08 00:31:32.507766 | orchestrator | Wednesday 08 April 2026 00:31:30 +0000 (0:00:01.119) 0:06:47.019 *******
2026-04-08 00:31:32.507777 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:32.507788 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:31:32.507798 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:31:32.507809 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:31:32.507820 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:31:32.507831 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:31:32.507841 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:31:32.507852 | orchestrator |
2026-04-08 00:31:32.507863 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-08 00:31:32.507874 | orchestrator | Wednesday 08 April 2026 00:31:31 +0000 (0:00:01.247) 0:06:48.266 *******
2026-04-08 00:31:32.507885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:31:32.507896 | orchestrator |
2026-04-08 00:31:32.507907 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:31:32.507918 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.914) 0:06:49.181 *******
2026-04-08 00:31:32.507929 | orchestrator |
2026-04-08 00:31:32.507940 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:31:32.507951 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.040) 0:06:49.221 *******
2026-04-08 00:31:32.507962 | orchestrator |
2026-04-08 00:31:32.507972 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:31:32.507983 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.217) 0:06:49.439 *******
2026-04-08 00:31:32.507994 | orchestrator |
2026-04-08 00:31:32.508005 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:31:32.508023 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.042) 0:06:49.482 *******
2026-04-08 00:32:00.065253 | orchestrator |
2026-04-08 00:32:00.065369 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:32:00.065412 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.041) 0:06:49.523 *******
2026-04-08 00:32:00.065420 | orchestrator |
2026-04-08 00:32:00.065427 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:32:00.065433 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.047) 0:06:49.571 *******
2026-04-08 00:32:00.065439 | orchestrator |
2026-04-08 00:32:00.065445 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:32:00.065451 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.040) 0:06:49.611 *******
2026-04-08 00:32:00.065457 | orchestrator |
2026-04-08 00:32:00.065463 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-08 00:32:00.065469 | orchestrator | Wednesday 08 April 2026 00:31:32 +0000 (0:00:00.040) 0:06:49.651 *******
2026-04-08 00:32:00.065475 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:00.065482 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:00.065488 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:00.065494 | orchestrator |
2026-04-08 00:32:00.065568 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-08 00:32:00.065577 | orchestrator | Wednesday 08 April 2026 00:31:34 +0000 (0:00:02.020) 0:06:51.672 *******
2026-04-08 00:32:00.065583 | orchestrator | changed: [testbed-manager]
2026-04-08 00:32:00.065590 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:00.065596 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:00.065602 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:00.065608 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:00.065614 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:00.065620 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:00.065626 | orchestrator |
2026-04-08 00:32:00.065632 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-08 00:32:00.065638 | orchestrator | Wednesday 08 April 2026 00:31:36 +0000 (0:00:01.487) 0:06:53.160 *******
2026-04-08 00:32:00.065644 | orchestrator | changed: [testbed-manager]
2026-04-08 00:32:00.065650 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:00.065656 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:00.065662 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:00.065667 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:00.065673 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:00.065679 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:00.065685 | orchestrator |
2026-04-08 00:32:00.065691 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-08 00:32:00.065697 | orchestrator | Wednesday 08 April 2026 00:31:37 +0000 (0:00:01.199) 0:06:54.360 *******
2026-04-08 00:32:00.065718 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:00.065724 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:00.065730 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:00.065735 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:00.065741 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:00.065747 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:00.065753 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:00.065759 | orchestrator |
2026-04-08 00:32:00.065776 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-08 00:32:00.065782 | orchestrator | Wednesday 08 April 2026 00:31:39 +0000 (0:00:02.460) 0:06:56.820 *******
2026-04-08 00:32:00.065789 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:00.065796 | orchestrator |
2026-04-08 00:32:00.065803 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-08 00:32:00.065810 | orchestrator | Wednesday 08 April 2026 00:31:39 +0000 (0:00:00.118) 0:06:56.939 *******
2026-04-08 00:32:00.065817 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:00.065823 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:00.065830 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:00.065837 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:00.065845 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:00.065858 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:00.065865 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:00.065871 | orchestrator |
2026-04-08 00:32:00.065879 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-08 00:32:00.065887 | orchestrator | Wednesday 08 April 2026 00:31:41 +0000 (0:00:01.233) 0:06:58.173 *******
2026-04-08 00:32:00.065894 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:00.065901 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:00.065908 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:00.065914 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:00.065921 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:00.065928 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:00.065935 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:00.065942 | orchestrator |
2026-04-08 00:32:00.065948 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-08 00:32:00.065955 | orchestrator | Wednesday 08 April 2026 00:31:41 +0000 (0:00:00.507) 0:06:58.680 *******
2026-04-08 00:32:00.065963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:32:00.065973 | orchestrator |
2026-04-08 00:32:00.065981 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-08 00:32:00.065988 | orchestrator | Wednesday 08 April 2026 00:31:42 +0000 (0:00:00.872) 0:06:59.552 *******
2026-04-08 00:32:00.065995 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:00.066002 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:00.066008 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:00.066054 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:00.066060 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:00.066066 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:00.066072 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:00.066078 | orchestrator |
2026-04-08 00:32:00.066084 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-08 00:32:00.066090 | orchestrator | Wednesday 08 April 2026 00:31:43 +0000 (0:00:00.990) 0:07:00.543 *******
2026-04-08 00:32:00.066096 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-08 00:32:00.066115 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-08 00:32:00.066123 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-08 00:32:00.066129 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-08 00:32:00.066134 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-08 00:32:00.066140 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-08 00:32:00.066146 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-08 00:32:00.066152 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-08 00:32:00.066158 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-08 00:32:00.066164 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-08 00:32:00.066170 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-08 00:32:00.066175 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-08 00:32:00.066181 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-08 00:32:00.066190 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-08 00:32:00.066203 | orchestrator |
2026-04-08 00:32:00.066216 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-08 00:32:00.066225 | orchestrator | Wednesday 08 April 2026 00:31:46 +0000 (0:00:02.481) 0:07:03.025 *******
2026-04-08 00:32:00.066234 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:00.066244 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:00.066253 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:00.066263 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:00.066280 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:00.066290 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:00.066300 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:00.066309 | orchestrator |
2026-04-08 00:32:00.066315 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-08 00:32:00.066321 | orchestrator | Wednesday 08 April 2026 00:31:46 +0000 (0:00:00.467) 0:07:03.492 *******
2026-04-08 00:32:00.066329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:32:00.066336 | orchestrator |
2026-04-08 00:32:00.066342 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-08 00:32:00.066348 | orchestrator | Wednesday 08 April 2026 00:31:47 +0000 (0:00:00.899) 0:07:04.392 *******
2026-04-08 00:32:00.066354 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:00.066360 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:00.066365 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:00.066371 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:00.066377 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:00.066383 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:00.066388 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:00.066394 | orchestrator |
2026-04-08 00:32:00.066405 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-08 00:32:00.066411 | orchestrator | Wednesday 08 April 2026 00:31:48 +0000 (0:00:00.841) 0:07:05.233 *******
2026-04-08 00:32:00.066417 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:00.066422 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:00.066428 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:00.066434 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:00.066439 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:00.066445 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:00.066451 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:00.066457 | orchestrator |
2026-04-08 00:32:00.066462 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-08 00:32:00.066468 | orchestrator | Wednesday 08 April 2026 00:31:49 +0000 (0:00:00.892) 0:07:06.125 *******
2026-04-08 00:32:00.066474 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:00.066480 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:00.066486 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:00.066492 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:00.066497 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:00.066525 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:00.066531 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:00.066537 | orchestrator |
2026-04-08 00:32:00.066542 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-08 00:32:00.066548 | orchestrator | Wednesday 08 April 2026 00:31:49 +0000 (0:00:00.470) 0:07:06.596 *******
2026-04-08 00:32:00.066554 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:00.066560 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:00.066566 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:00.066571 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:00.066577 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:00.066583 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:00.066588 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:00.066594 | orchestrator |
2026-04-08 00:32:00.066600 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-08 00:32:00.066606 | orchestrator | Wednesday 08 April 2026 00:31:51 +0000 (0:00:01.449) 0:07:08.045 *******
2026-04-08 00:32:00.066612 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:00.066617 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:00.066623 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:00.066629 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:00.066635 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:00.066646 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:00.066652 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:00.066658 | orchestrator |
2026-04-08 00:32:00.066664 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-08 00:32:00.066669 | orchestrator | Wednesday 08 April 2026 00:31:51 +0000 (0:00:00.641) 0:07:08.687 *******
2026-04-08 00:32:00.066675 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:00.066681 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:00.066687 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:00.066693 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:00.066698 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:00.066704 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:00.066716 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:32.398739 | orchestrator |
2026-04-08 00:32:32.398859 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-08 00:32:32.398875 | orchestrator | Wednesday 08 April 2026 00:32:00 +0000 (0:00:08.415) 0:07:17.103 *******
2026-04-08 00:32:32.398887 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.398900 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:32.398911 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:32.398920 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:32.398930 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:32.398939 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:32.398948 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:32.398957 | orchestrator |
2026-04-08 00:32:32.398967 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-08 00:32:32.398978 | orchestrator | Wednesday 08 April 2026 00:32:01 +0000 (0:00:01.396) 0:07:18.499 *******
2026-04-08 00:32:32.398989 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.398999 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:32.399008 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:32.399017 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:32.399026 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:32.399035 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:32.399045 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:32.399054 | orchestrator |
2026-04-08 00:32:32.399065 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-08 00:32:32.399076 | orchestrator | Wednesday 08 April 2026 00:32:03 +0000 (0:00:01.736) 0:07:20.236 *******
2026-04-08 00:32:32.399087 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.399096 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:32:32.399106 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:32:32.399115 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:32:32.399124 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:32:32.399133 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:32:32.399143 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:32:32.399153 | orchestrator |
2026-04-08 00:32:32.399164 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-08 00:32:32.399175 | orchestrator | Wednesday 08 April 2026 00:32:05 +0000 (0:00:01.859) 0:07:22.096 *******
2026-04-08 00:32:32.399185 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.399194 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.399203 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.399212 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.399222 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.399231 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.399242 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.399253 | orchestrator |
2026-04-08 00:32:32.399263 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-08 00:32:32.399274 | orchestrator | Wednesday 08 April 2026 00:32:05 +0000 (0:00:00.853) 0:07:22.949 *******
2026-04-08 00:32:32.399284 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:32.399295 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:32.399305 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:32.399345 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:32.399357 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:32.399368 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:32.399378 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:32.399387 | orchestrator |
2026-04-08 00:32:32.399398 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-08 00:32:32.399407 | orchestrator | Wednesday 08 April 2026 00:32:06 +0000 (0:00:00.812) 0:07:23.761 *******
2026-04-08 00:32:32.399416 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:32.399427 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:32.399438 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:32.399447 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:32.399456 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:32.399465 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:32.399534 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:32.399546 | orchestrator |
2026-04-08 00:32:32.399556 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-08 00:32:32.399565 | orchestrator | Wednesday 08 April 2026 00:32:07 +0000 (0:00:00.700) 0:07:24.462 *******
2026-04-08 00:32:32.399574 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.399583 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.399593 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.399603 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.399613 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.399624 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.399635 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.399645 | orchestrator |
2026-04-08 00:32:32.399655 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-08 00:32:32.399664 | orchestrator | Wednesday 08 April 2026 00:32:08 +0000 (0:00:00.531) 0:07:24.994 *******
2026-04-08 00:32:32.399673 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.399682 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.399691 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.399701 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.399712 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.399723 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.399733 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.399742 | orchestrator |
2026-04-08 00:32:32.399751 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-08 00:32:32.399761 | orchestrator | Wednesday 08 April 2026 00:32:08 +0000 (0:00:00.508) 0:07:25.502 *******
2026-04-08 00:32:32.399769 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.399779 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.399789 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.399799 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.399809 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.399820 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.399830 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.399839 | orchestrator |
2026-04-08 00:32:32.399849 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-08 00:32:32.399858 | orchestrator | Wednesday 08 April 2026 00:32:09 +0000 (0:00:00.501) 0:07:26.003 *******
2026-04-08 00:32:32.399867 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.399877 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.399887 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.399898 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.399909 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.399918 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.399928 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.399937 | orchestrator |
2026-04-08 00:32:32.399964 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-08 00:32:32.399975 | orchestrator | Wednesday 08 April 2026 00:32:14 +0000 (0:00:05.017) 0:07:31.021 *******
2026-04-08 00:32:32.399985 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:32.399996 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:32.400019 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:32.400029 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:32.400038 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:32.400046 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:32.400056 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:32.400064 | orchestrator |
2026-04-08 00:32:32.400075 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-08 00:32:32.400085 | orchestrator | Wednesday 08 April 2026 00:32:14 +0000 (0:00:00.711) 0:07:31.732 *******
2026-04-08 00:32:32.400098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:32:32.400111 | orchestrator |
2026-04-08 00:32:32.400120 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-08 00:32:32.400130 | orchestrator | Wednesday 08 April 2026 00:32:15 +0000 (0:00:00.793) 0:07:32.526 *******
2026-04-08 00:32:32.400157 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.400168 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.400178 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.400189 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.400200 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.400209 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.400219 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.400228 | orchestrator |
2026-04-08 00:32:32.400237 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-08 00:32:32.400246 | orchestrator | Wednesday 08 April 2026 00:32:17 +0000 (0:00:01.947) 0:07:34.474 *******
2026-04-08 00:32:32.400257 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.400267 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.400277 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.400288 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.400297 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.400306 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.400316 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.400325 | orchestrator |
2026-04-08 00:32:32.400335 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-08 00:32:32.400345 | orchestrator | Wednesday 08 April 2026 00:32:18 +0000 (0:00:01.207) 0:07:35.681 *******
2026-04-08 00:32:32.400355 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:32.400365 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:32.400376 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:32.400385 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:32.400396 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:32.400406 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:32.400417 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:32.400426 | orchestrator |
2026-04-08 00:32:32.400435 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-08 00:32:32.400451 | orchestrator | Wednesday 08 April 2026 00:32:19 +0000 (0:00:00.838) 0:07:36.519 *******
2026-04-08 00:32:32.400462 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-08 00:32:32.400495 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-08 00:32:32.400505 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-08 00:32:32.400514 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-08 00:32:32.400523 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-08 00:32:32.400534 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-08 00:32:32.400553 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-08 00:32:32.400563 | orchestrator |
2026-04-08 00:32:32.400572 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-08 00:32:32.400581 | orchestrator | Wednesday 08 April 2026 00:32:21 +0000 (0:00:01.698) 0:07:38.218 *******
2026-04-08 00:32:32.400591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:32:32.400602 | orchestrator |
2026-04-08 00:32:32.400612 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-08 00:32:32.400623 |
orchestrator | Wednesday 08 April 2026 00:32:22 +0000 (0:00:00.953) 0:07:39.171 ******* 2026-04-08 00:32:32.400634 | orchestrator | changed: [testbed-manager] 2026-04-08 00:32:32.400644 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:32:32.400654 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:32:32.400664 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:32:32.400673 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:32:32.400682 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:32:32.400691 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:32:32.400702 | orchestrator | 2026-04-08 00:32:32.400721 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-08 00:33:02.673299 | orchestrator | Wednesday 08 April 2026 00:32:32 +0000 (0:00:10.200) 0:07:49.373 ******* 2026-04-08 00:33:02.673414 | orchestrator | ok: [testbed-manager] 2026-04-08 00:33:02.673430 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:33:02.674137 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:33:02.674158 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:33:02.674194 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:33:02.674204 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:33:02.674213 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:33:02.674222 | orchestrator | 2026-04-08 00:33:02.674232 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-08 00:33:02.674242 | orchestrator | Wednesday 08 April 2026 00:32:34 +0000 (0:00:01.673) 0:07:51.046 ******* 2026-04-08 00:33:02.674251 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:33:02.674261 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:33:02.674269 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:33:02.674278 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:33:02.674287 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:33:02.674296 | orchestrator | ok: [testbed-node-3] 
2026-04-08 00:33:02.674305 | orchestrator |
2026-04-08 00:33:02.674314 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-08 00:33:02.674324 | orchestrator | Wednesday 08 April 2026 00:32:35 +0000 (0:00:01.329) 0:07:52.375 *******
2026-04-08 00:33:02.674333 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.674343 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.674352 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.674361 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.674370 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.674378 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.674387 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.674396 | orchestrator |
2026-04-08 00:33:02.674405 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-08 00:33:02.674414 | orchestrator |
2026-04-08 00:33:02.674423 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-08 00:33:02.674432 | orchestrator | Wednesday 08 April 2026 00:32:36 +0000 (0:00:01.141) 0:07:53.517 *******
2026-04-08 00:33:02.674499 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:33:02.674513 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:33:02.674555 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:33:02.674565 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:33:02.674574 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:33:02.674582 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:33:02.674591 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:33:02.674599 | orchestrator |
2026-04-08 00:33:02.674608 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-08 00:33:02.674617 | orchestrator |
2026-04-08 00:33:02.674626 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-08 00:33:02.674634 | orchestrator | Wednesday 08 April 2026 00:32:36 +0000 (0:00:00.429) 0:07:53.947 *******
2026-04-08 00:33:02.674643 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.674652 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.674661 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.674669 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.674679 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.674701 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.674710 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.674718 | orchestrator |
2026-04-08 00:33:02.674727 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-08 00:33:02.674736 | orchestrator | Wednesday 08 April 2026 00:32:38 +0000 (0:00:01.254) 0:07:55.201 *******
2026-04-08 00:33:02.674745 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:02.674755 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:33:02.674769 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:33:02.674793 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:33:02.674816 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:33:02.674834 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:33:02.674853 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:33:02.674871 | orchestrator |
2026-04-08 00:33:02.674891 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-08 00:33:02.674902 | orchestrator | Wednesday 08 April 2026 00:32:39 +0000 (0:00:01.649) 0:07:56.850 *******
2026-04-08 00:33:02.674913 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:33:02.674924 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:33:02.674935 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:33:02.674946 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:33:02.674957 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:33:02.674968 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:33:02.674978 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:33:02.674989 | orchestrator |
2026-04-08 00:33:02.675000 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-08 00:33:02.675011 | orchestrator | Wednesday 08 April 2026 00:32:40 +0000 (0:00:00.548) 0:07:57.399 *******
2026-04-08 00:33:02.675022 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:33:02.675035 | orchestrator |
2026-04-08 00:33:02.675046 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-08 00:33:02.675057 | orchestrator | Wednesday 08 April 2026 00:32:41 +0000 (0:00:00.808) 0:07:58.208 *******
2026-04-08 00:33:02.675070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:33:02.675083 | orchestrator |
2026-04-08 00:33:02.675094 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-08 00:33:02.675105 | orchestrator | Wednesday 08 April 2026 00:32:42 +0000 (0:00:00.914) 0:07:59.122 *******
2026-04-08 00:33:02.675116 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.675127 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.675138 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.675149 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.675171 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.675183 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.675193 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.675204 | orchestrator |
2026-04-08 00:33:02.675236 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-08 00:33:02.675247 | orchestrator | Wednesday 08 April 2026 00:32:51 +0000 (0:00:09.166) 0:08:08.289 *******
2026-04-08 00:33:02.675258 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.675269 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.675280 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.675291 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.675302 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.675313 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.675340 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.675351 | orchestrator |
2026-04-08 00:33:02.675363 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-08 00:33:02.675384 | orchestrator | Wednesday 08 April 2026 00:32:52 +0000 (0:00:00.779) 0:08:09.068 *******
2026-04-08 00:33:02.675396 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.675425 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.675462 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.675474 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.675485 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.675496 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.675506 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.675517 | orchestrator |
2026-04-08 00:33:02.675528 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-08 00:33:02.675539 | orchestrator | Wednesday 08 April 2026 00:32:54 +0000 (0:00:01.978) 0:08:11.046 *******
2026-04-08 00:33:02.675550 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.675561 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.675572 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.675582 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.675593 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.675603 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.675614 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.675625 | orchestrator |
2026-04-08 00:33:02.675636 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-08 00:33:02.675647 | orchestrator | Wednesday 08 April 2026 00:32:55 +0000 (0:00:01.896) 0:08:12.943 *******
2026-04-08 00:33:02.675657 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.675668 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.675679 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.675689 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.675700 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.675711 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.675721 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.675732 | orchestrator |
2026-04-08 00:33:02.675743 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-08 00:33:02.675755 | orchestrator | Wednesday 08 April 2026 00:32:57 +0000 (0:00:01.201) 0:08:14.145 *******
2026-04-08 00:33:02.675773 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.675792 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.675810 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.675828 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.675847 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.675875 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.675894 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.675908 | orchestrator |
2026-04-08 00:33:02.675918 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-08 00:33:02.675929 | orchestrator |
2026-04-08 00:33:02.675940 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-08 00:33:02.675951 | orchestrator | Wednesday 08 April 2026 00:32:58 +0000 (0:00:01.138) 0:08:15.283 *******
2026-04-08 00:33:02.675972 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:33:02.675983 | orchestrator |
2026-04-08 00:33:02.675994 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-08 00:33:02.676005 | orchestrator | Wednesday 08 April 2026 00:32:59 +0000 (0:00:00.919) 0:08:16.204 *******
2026-04-08 00:33:02.676016 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:02.676027 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:33:02.676054 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:33:02.676065 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:33:02.676076 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:33:02.676087 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:33:02.676098 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:33:02.676109 | orchestrator |
2026-04-08 00:33:02.676120 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-08 00:33:02.676131 | orchestrator | Wednesday 08 April 2026 00:33:00 +0000 (0:00:00.807) 0:08:17.011 *******
2026-04-08 00:33:02.676142 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:02.676153 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:02.676164 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:02.676174 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:02.676185 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:02.676196 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:02.676207 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:02.676218 | orchestrator |
2026-04-08 00:33:02.676229 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-08 00:33:02.676240 | orchestrator | Wednesday 08 April 2026 00:33:01 +0000 (0:00:01.150) 0:08:18.162 *******
2026-04-08 00:33:02.676251 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:33:02.676262 | orchestrator |
2026-04-08 00:33:02.676273 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-08 00:33:02.676284 | orchestrator | Wednesday 08 April 2026 00:33:01 +0000 (0:00:00.714) 0:08:18.876 *******
2026-04-08 00:33:02.676295 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:02.676306 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:33:02.676317 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:33:02.676328 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:33:02.676339 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:33:02.676349 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:33:02.676360 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:33:02.676371 | orchestrator |
2026-04-08 00:33:02.676391 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-08 00:33:03.946262 | orchestrator | Wednesday 08 April 2026 00:33:02 +0000 (0:00:00.771) 0:08:19.648 *******
2026-04-08 00:33:03.946381 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:03.946401 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:03.946414 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:03.946426 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:03.946495 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:03.946508 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:03.946519 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:03.946531 | orchestrator |
2026-04-08 00:33:03.946543 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:33:03.946559 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-08 00:33:03.946587 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-08 00:33:03.946610 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-08 00:33:03.946663 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-08 00:33:03.946684 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-08 00:33:03.946703 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-08 00:33:03.946721 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-08 00:33:03.946739 | orchestrator |
2026-04-08 00:33:03.946759 | orchestrator |
2026-04-08 00:33:03.946780 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:33:03.946799 | orchestrator | Wednesday 08 April 2026 00:33:03 +0000 (0:00:01.097) 0:08:20.745 *******
2026-04-08 00:33:03.946820 | orchestrator | ===============================================================================
2026-04-08 00:33:03.946839 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.47s
2026-04-08 00:33:03.946859 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.24s
2026-04-08 00:33:03.946878 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.56s
2026-04-08 00:33:03.946917 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.83s
2026-04-08 00:33:03.946937 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.11s
2026-04-08 00:33:03.946950 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.12s
2026-04-08 00:33:03.946964 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.92s
2026-04-08 00:33:03.946978 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.22s
2026-04-08 00:33:03.946989 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.20s
2026-04-08 00:33:03.947000 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.94s
2026-04-08 00:33:03.947011 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.24s
2026-04-08 00:33:03.947021 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.17s
2026-04-08 00:33:03.947032 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.99s
2026-04-08 00:33:03.947044 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.51s
2026-04-08 00:33:03.947055 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.42s
2026-04-08 00:33:03.947065 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.35s
2026-04-08 00:33:03.947076 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.62s
2026-04-08 00:33:03.947087 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.80s
2026-04-08 00:33:03.947098 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.36s
2026-04-08 00:33:03.947109 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.05s
2026-04-08 00:33:04.072844 | orchestrator | + osism apply fail2ban
2026-04-08 00:33:15.481730 | orchestrator | 2026-04-08 00:33:15 | INFO  | Prepare task for execution of fail2ban.
2026-04-08 00:33:15.560989 | orchestrator | 2026-04-08 00:33:15 | INFO  | Task 91e69072-66eb-4fe8-8b8d-bedb07073ec7 (fail2ban) was prepared for execution.
2026-04-08 00:33:15.561102 | orchestrator | 2026-04-08 00:33:15 | INFO  | It takes a moment until task 91e69072-66eb-4fe8-8b8d-bedb07073ec7 (fail2ban) has been started and output is visible here.
2026-04-08 00:33:36.335330 | orchestrator |
2026-04-08 00:33:36.335461 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-08 00:33:36.335490 | orchestrator |
2026-04-08 00:33:36.335495 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-08 00:33:36.335548 | orchestrator | Wednesday 08 April 2026 00:33:18 +0000 (0:00:00.334) 0:00:00.334 *******
2026-04-08 00:33:36.335556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:33:36.335562 | orchestrator |
2026-04-08 00:33:36.335567 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-08 00:33:36.335571 | orchestrator | Wednesday 08 April 2026 00:33:20 +0000 (0:00:01.092) 0:00:01.427 *******
2026-04-08 00:33:36.335576 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:36.335583 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:36.335589 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:36.335595 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:36.335601 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:36.335607 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:36.335613 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:36.335620 | orchestrator |
2026-04-08 00:33:36.335628 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-08 00:33:36.335633 | orchestrator | Wednesday 08 April 2026 00:33:31 +0000 (0:00:11.368) 0:00:12.795 *******
2026-04-08 00:33:36.335639 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:36.335645 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:36.335652 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:36.335659 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:36.335667 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:36.335673 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:36.335681 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:36.335687 | orchestrator |
2026-04-08 00:33:36.335693 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-08 00:33:36.335699 | orchestrator | Wednesday 08 April 2026 00:33:33 +0000 (0:00:01.629) 0:00:14.425 *******
2026-04-08 00:33:36.335706 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:36.335713 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:33:36.335719 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:33:36.335725 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:33:36.335732 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:33:36.335737 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:33:36.335740 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:33:36.335744 | orchestrator |
2026-04-08 00:33:36.335748 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-08 00:33:36.335752 | orchestrator | Wednesday 08 April 2026 00:33:34 +0000 (0:00:01.250) 0:00:15.675 *******
2026-04-08 00:33:36.335756 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:36.335759 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:36.335764 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:36.335767 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:36.335771 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:36.335775 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:36.335779 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:36.335783 | orchestrator |
2026-04-08 00:33:36.335786 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:33:36.335801 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:36.335807 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:36.335811 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:36.335815 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:36.335826 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:36.335830 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:36.335834 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:36.335837 | orchestrator |
2026-04-08 00:33:36.335841 | orchestrator |
2026-04-08 00:33:36.335845 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:33:36.335849 | orchestrator | Wednesday 08 April 2026 00:33:35 +0000 (0:00:01.646) 0:00:17.321 *******
2026-04-08 00:33:36.335853 | orchestrator | ===============================================================================
2026-04-08 00:33:36.335857 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.37s
2026-04-08 00:33:36.335860 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-04-08 00:33:36.335864 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.63s
2026-04-08 00:33:36.335868 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.25s
2026-04-08 00:33:36.335872 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s
2026-04-08 00:33:36.508733 | orchestrator | + osism apply network
2026-04-08 00:33:47.876069 | orchestrator | 2026-04-08 00:33:47 | INFO  | Prepare task for execution of network.
2026-04-08 00:33:47.958121 | orchestrator | 2026-04-08 00:33:47 | INFO  | Task 505a8749-374b-45c6-879f-d02c3f5af47a (network) was prepared for execution.
2026-04-08 00:33:47.958218 | orchestrator | 2026-04-08 00:33:47 | INFO  | It takes a moment until task 505a8749-374b-45c6-879f-d02c3f5af47a (network) has been started and output is visible here.
2026-04-08 00:34:15.687182 | orchestrator | 2026-04-08 00:34:15.687286 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-08 00:34:15.687302 | orchestrator | 2026-04-08 00:34:15.687313 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-08 00:34:15.687325 | orchestrator | Wednesday 08 April 2026 00:33:51 +0000 (0:00:00.316) 0:00:00.316 ******* 2026-04-08 00:34:15.687335 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.687346 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:15.687356 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:15.687424 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:15.687434 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:15.687444 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:15.687454 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:15.687464 | orchestrator | 2026-04-08 00:34:15.687474 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-08 00:34:15.687484 | orchestrator | Wednesday 08 April 2026 00:33:51 +0000 (0:00:00.624) 0:00:00.941 ******* 2026-04-08 00:34:15.687497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:34:15.687510 | orchestrator | 2026-04-08 00:34:15.687520 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-08 00:34:15.687530 | orchestrator | Wednesday 08 April 2026 00:33:52 +0000 (0:00:01.097) 0:00:02.038 ******* 2026-04-08 00:34:15.687540 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.687550 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:15.687560 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:15.687570 | 
orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:15.687580 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:15.687590 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:15.687624 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:15.687634 | orchestrator | 2026-04-08 00:34:15.687644 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-08 00:34:15.687654 | orchestrator | Wednesday 08 April 2026 00:33:55 +0000 (0:00:02.635) 0:00:04.674 ******* 2026-04-08 00:34:15.687664 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.687674 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:15.687683 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:15.687693 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:15.687703 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:15.687712 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:15.687724 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:15.687736 | orchestrator | 2026-04-08 00:34:15.687748 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-08 00:34:15.687759 | orchestrator | Wednesday 08 April 2026 00:33:57 +0000 (0:00:01.677) 0:00:06.351 ******* 2026-04-08 00:34:15.687770 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-08 00:34:15.687782 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-08 00:34:15.687794 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-08 00:34:15.687805 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-08 00:34:15.687817 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-08 00:34:15.687828 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-08 00:34:15.687839 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-08 00:34:15.687851 | orchestrator | 2026-04-08 00:34:15.687863 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-04-08 00:34:15.687875 | orchestrator | Wednesday 08 April 2026 00:33:58 +0000 (0:00:01.004) 0:00:07.356 ******* 2026-04-08 00:34:15.687886 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:15.687899 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:15.687911 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:15.687922 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:15.687934 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:15.687945 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:15.687957 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:15.687968 | orchestrator | 2026-04-08 00:34:15.687979 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-04-08 00:34:15.687989 | orchestrator | Wednesday 08 April 2026 00:33:58 +0000 (0:00:00.663) 0:00:08.019 ******* 2026-04-08 00:34:15.687999 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:15.688009 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:15.688019 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:15.688028 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:15.688038 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:15.688047 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:15.688057 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:15.688067 | orchestrator | 2026-04-08 00:34:15.688076 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-04-08 00:34:15.688086 | orchestrator | Wednesday 08 April 2026 00:33:59 +0000 (0:00:00.678) 0:00:08.697 ******* 2026-04-08 00:34:15.688096 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:15.688106 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:15.688115 | orchestrator | skipping: [testbed-node-1] 
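The tasks above render a netplan configuration on localhost and copy it to `/etc/netplan` on every host. The actual rendered content is not shown in this log; a minimal sketch of what such a file might look like (the interface name, addressing, and filename are assumptions for illustration, not taken from this run):

```yaml
# /etc/netplan/01-osism.yaml -- hypothetical rendered result of the
# "Prepare netplan configuration template" / "Copy netplan configuration"
# tasks above. Interface name and addresses are illustrative assumptions.
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
      mtu: 1500
```

The later "Remove unused configuration files" task shows that `01-osism.yaml` is kept while the cloud-init default `50-cloud-init.yaml` is deleted, which is consistent with the role managing the whole netplan state on these hosts.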
2026-04-08 00:34:15.688125 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:15.688135 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:15.688144 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:15.688154 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:15.688163 | orchestrator | 2026-04-08 00:34:15.688173 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-08 00:34:15.688183 | orchestrator | Wednesday 08 April 2026 00:34:00 +0000 (0:00:00.644) 0:00:09.342 ******* 2026-04-08 00:34:15.688193 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:34:15.688209 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 00:34:15.688219 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:34:15.688229 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-08 00:34:15.688238 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 00:34:15.688248 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-08 00:34:15.688258 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 00:34:15.688267 | orchestrator | 2026-04-08 00:34:15.688294 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-08 00:34:15.688310 | orchestrator | Wednesday 08 April 2026 00:34:03 +0000 (0:00:02.987) 0:00:12.329 ******* 2026-04-08 00:34:15.688326 | orchestrator | changed: [testbed-manager] 2026-04-08 00:34:15.688343 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:34:15.688380 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:34:15.688396 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:34:15.688434 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:34:15.688451 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:34:15.688468 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:34:15.688482 | orchestrator | 2026-04-08 00:34:15.688498 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-08 00:34:15.688513 | orchestrator | Wednesday 08 April 2026 00:34:04 +0000 (0:00:01.537) 0:00:13.867 ******* 2026-04-08 00:34:15.688529 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:34:15.688545 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:34:15.688562 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-08 00:34:15.688578 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-08 00:34:15.688594 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 00:34:15.688607 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 00:34:15.688617 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 00:34:15.688627 | orchestrator | 2026-04-08 00:34:15.688636 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-08 00:34:15.688646 | orchestrator | Wednesday 08 April 2026 00:34:06 +0000 (0:00:01.607) 0:00:15.474 ******* 2026-04-08 00:34:15.688656 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.688665 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:15.688675 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:15.688685 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:15.688694 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:15.688703 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:15.688713 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:15.688722 | orchestrator | 2026-04-08 00:34:15.688732 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-08 00:34:15.688742 | orchestrator | Wednesday 08 April 2026 00:34:07 +0000 (0:00:01.025) 0:00:16.499 ******* 2026-04-08 00:34:15.688751 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:15.688761 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:15.688770 | orchestrator | skipping: [testbed-node-1] 2026-04-08 
00:34:15.688780 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:15.688789 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:15.688799 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:15.688808 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:15.688818 | orchestrator | 2026-04-08 00:34:15.688827 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-08 00:34:15.688837 | orchestrator | Wednesday 08 April 2026 00:34:08 +0000 (0:00:00.565) 0:00:17.065 ******* 2026-04-08 00:34:15.688846 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.688856 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:15.688865 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:15.688875 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:15.688885 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:15.688894 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:15.688910 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:15.688920 | orchestrator | 2026-04-08 00:34:15.688930 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-08 00:34:15.688949 | orchestrator | Wednesday 08 April 2026 00:34:10 +0000 (0:00:02.163) 0:00:19.228 ******* 2026-04-08 00:34:15.688959 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:15.688969 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:15.688978 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:15.688988 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:15.688997 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:15.689006 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:15.689016 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-08 00:34:15.689028 | orchestrator | 2026-04-08 00:34:15.689037 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-08 00:34:15.689047 | orchestrator | Wednesday 08 April 2026 00:34:11 +0000 (0:00:00.896) 0:00:20.125 ******* 2026-04-08 00:34:15.689057 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.689066 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:34:15.689076 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:34:15.689085 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:34:15.689095 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:34:15.689104 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:34:15.689113 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:34:15.689123 | orchestrator | 2026-04-08 00:34:15.689132 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-08 00:34:15.689142 | orchestrator | Wednesday 08 April 2026 00:34:12 +0000 (0:00:01.717) 0:00:21.843 ******* 2026-04-08 00:34:15.689153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:34:15.689165 | orchestrator | 2026-04-08 00:34:15.689174 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-08 00:34:15.689184 | orchestrator | Wednesday 08 April 2026 00:34:14 +0000 (0:00:01.215) 0:00:23.059 ******* 2026-04-08 00:34:15.689193 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.689203 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:15.689212 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:15.689222 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:15.689231 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:15.689241 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:15.689250 | orchestrator | ok: [testbed-node-5] 2026-04-08 
00:34:15.689260 | orchestrator | 2026-04-08 00:34:15.689270 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-08 00:34:15.689279 | orchestrator | Wednesday 08 April 2026 00:34:15 +0000 (0:00:01.157) 0:00:24.217 ******* 2026-04-08 00:34:15.689289 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:15.689299 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:15.689308 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:15.689317 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:15.689327 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:15.689347 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:31.770962 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:31.771070 | orchestrator | 2026-04-08 00:34:31.771084 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-08 00:34:31.771097 | orchestrator | Wednesday 08 April 2026 00:34:15 +0000 (0:00:00.634) 0:00:24.851 ******* 2026-04-08 00:34:31.771108 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:34:31.771118 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:34:31.771128 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:34:31.771137 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:34:31.771147 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:34:31.771181 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:34:31.771191 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:34:31.771201 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:34:31.771210 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:34:31.771220 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:34:31.771230 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:34:31.771240 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:34:31.771249 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:34:31.771259 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:34:31.771268 | orchestrator | 2026-04-08 00:34:31.771278 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-08 00:34:31.771287 | orchestrator | Wednesday 08 April 2026 00:34:17 +0000 (0:00:01.215) 0:00:26.067 ******* 2026-04-08 00:34:31.771297 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:31.771307 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:31.771316 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:31.771326 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:31.771335 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:31.771411 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:31.771422 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:31.771431 | orchestrator | 2026-04-08 00:34:31.771441 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-08 00:34:31.771451 | orchestrator | Wednesday 08 April 2026 00:34:17 +0000 (0:00:00.604) 0:00:26.671 ******* 2026-04-08 00:34:31.771476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:34:31.771489 | orchestrator | 2026-04-08 
00:34:31.771499 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-08 00:34:31.771509 | orchestrator | Wednesday 08 April 2026 00:34:21 +0000 (0:00:04.340) 0:00:31.012 ******* 2026-04-08 00:34:31.771521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771532 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-08 00:34:31.771544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 
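The item data above shows that each host's `dests` list is simply every other VXLAN endpoint, ordered lexicographically as strings (which is why `192.168.16.5` sorts after `192.168.16.15`). A small sketch of how such a peer list could be derived — the function name and the sorting assumption are inferred from the ordering visible in the log, not taken from the role's source:

```python
# Sketch: derive each node's VXLAN peer list ("dests") as all endpoints
# except itself, sorted as strings. The endpoint-to-host mapping below is
# read off the log items; the derivation itself is an assumption.
ENDPOINTS = [
    "192.168.16.5",   # testbed-manager
    "192.168.16.10",  # testbed-node-0
    "192.168.16.11",  # testbed-node-1
    "192.168.16.12",  # testbed-node-2
    "192.168.16.13",  # testbed-node-3
    "192.168.16.14",  # testbed-node-4
    "192.168.16.15",  # testbed-node-5
]

def vxlan_dests(local_ip: str, endpoints: list[str]) -> list[str]:
    """Return all VXLAN peers except the local endpoint, string-sorted."""
    return sorted(ip for ip in endpoints if ip != local_ip)
```

String sorting (rather than numeric IP sorting) reproduces the exact ordering seen in every item above, e.g. `['192.168.16.11', ..., '192.168.16.15', '192.168.16.5']` for testbed-node-0.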
2026-04-08 00:34:31.771609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-08 00:34:31.771620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771631 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-08 00:34:31.771647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-08 00:34:31.771658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-08 00:34:31.771668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-08 00:34:31.771678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-08 00:34:31.771688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-08 00:34:31.771698 | orchestrator | 2026-04-08 00:34:31.771713 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-08 00:34:31.771724 | orchestrator | Wednesday 08 April 2026 00:34:27 +0000 (0:00:05.120) 0:00:36.132 ******* 2026-04-08 00:34:31.771734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771744 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-08 00:34:31.771754 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-08 00:34:31.771764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771774 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-08 00:34:31.771789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:31.771816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:43.957272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:34:43.957455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-08 00:34:43.957475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-08 00:34:43.957489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-08 00:34:43.957500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-08 00:34:43.957512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-08 00:34:43.957524 | orchestrator | 2026-04-08 00:34:43.957537 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-08 00:34:43.957549 | orchestrator | Wednesday 08 April 2026 00:34:32 +0000 (0:00:05.765) 0:00:41.898 ******* 2026-04-08 00:34:43.957580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:34:43.957593 | orchestrator | 2026-04-08 00:34:43.957604 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-08 00:34:43.957615 | orchestrator | Wednesday 08 April 2026 00:34:34 +0000 (0:00:01.213) 0:00:43.112 ******* 2026-04-08 00:34:43.957626 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:43.957639 | orchestrator | ok: [testbed-node-0] 2026-04-08 
00:34:43.957650 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:43.957661 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:43.957672 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:43.957683 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:43.957694 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:43.957705 | orchestrator | 2026-04-08 00:34:43.957740 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-08 00:34:43.957752 | orchestrator | Wednesday 08 April 2026 00:34:35 +0000 (0:00:00.960) 0:00:44.072 ******* 2026-04-08 00:34:43.957763 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:34:43.957774 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:34:43.957785 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:34:43.957797 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:34:43.957809 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:43.957823 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:34:43.957836 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:34:43.957849 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:34:43.957861 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:34:43.957874 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:43.957887 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:34:43.957900 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:34:43.957912 | orchestrator | 
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:34:43.957924 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:34:43.957937 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:43.957949 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:34:43.957962 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:34:43.957975 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:34:43.958004 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:34:43.958080 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:43.958096 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:34:43.958109 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:34:43.958122 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:34:43.958135 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:34:43.958148 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:43.958161 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:34:43.958174 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:34:43.958220 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:34:43.958232 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:34:43.958243 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:43.958254 | orchestrator | skipping: [testbed-node-5] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:34:43.958265 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:34:43.958276 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:34:43.958287 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:34:43.958298 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:43.958309 | orchestrator | 2026-04-08 00:34:43.958320 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-08 00:34:43.958364 | orchestrator | Wednesday 08 April 2026 00:34:35 +0000 (0:00:00.890) 0:00:44.963 ******* 2026-04-08 00:34:43.958376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:34:43.958388 | orchestrator | 2026-04-08 00:34:43.958399 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-08 00:34:43.958410 | orchestrator | Wednesday 08 April 2026 00:34:37 +0000 (0:00:01.175) 0:00:46.138 ******* 2026-04-08 00:34:43.958421 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:43.958438 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:43.958450 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:43.958461 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:43.958472 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:43.958483 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:43.958494 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:43.958505 | orchestrator | 2026-04-08 00:34:43.958516 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 
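The network-extra-init tasks above manage an optional script plus a systemd service wrapper; on this run the deploy steps are skipped and the remove steps run, so the service is absent. For orientation, a hypothetical sketch of what such a oneshot unit could look like (the unit name, script path, and ordering are assumptions, since the log does not show the unit's contents):

```ini
# Hypothetical /etc/systemd/system/network-extra-init.service sketch --
# a oneshot unit that runs an extra network setup script once networking
# is up. All names and paths here are illustrative assumptions.
[Unit]
Description=Extra network initialisation
After=systemd-networkd.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/network-extra-init.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```

`RemainAfterExit=true` would keep the unit reported as active after the one-time script finishes, matching the enable/start vs. disable/stop task pairing seen above.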
2026-04-08 00:34:43.958527 | orchestrator | Wednesday 08 April 2026 00:34:37 +0000 (0:00:00.589) 0:00:46.727 ******* 2026-04-08 00:34:43.958538 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:43.958549 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:43.958560 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:43.958571 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:43.958582 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:43.958593 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:43.958604 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:43.958615 | orchestrator | 2026-04-08 00:34:43.958626 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-08 00:34:43.958637 | orchestrator | Wednesday 08 April 2026 00:34:38 +0000 (0:00:00.769) 0:00:47.496 ******* 2026-04-08 00:34:43.958648 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:34:43.958659 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:34:43.958669 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:34:43.958680 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:34:43.958691 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:34:43.958702 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:43.958713 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:43.958724 | orchestrator | 2026-04-08 00:34:43.958735 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-08 00:34:43.958746 | orchestrator | Wednesday 08 April 2026 00:34:39 +0000 (0:00:00.591) 0:00:48.087 ******* 2026-04-08 00:34:43.958757 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:43.958768 | orchestrator | ok: [testbed-manager] 2026-04-08 00:34:43.958779 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:43.958790 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:43.958801 | orchestrator | ok: 
[testbed-node-3]
2026-04-08 00:34:43.958812 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:34:43.958823 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:34:43.958834 | orchestrator |
2026-04-08 00:34:43.958845 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-08 00:34:43.958856 | orchestrator | Wednesday 08 April 2026 00:34:40 +0000 (0:00:01.751) 0:00:49.839 *******
2026-04-08 00:34:43.958867 | orchestrator | ok: [testbed-manager]
2026-04-08 00:34:43.958878 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:34:43.958889 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:34:43.958900 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:34:43.958911 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:34:43.958921 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:34:43.958932 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:34:43.958943 | orchestrator |
2026-04-08 00:34:43.958954 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-08 00:34:43.958966 | orchestrator | Wednesday 08 April 2026 00:34:41 +0000 (0:00:01.166) 0:00:51.006 *******
2026-04-08 00:34:43.958983 | orchestrator | ok: [testbed-manager]
2026-04-08 00:34:43.958994 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:34:43.959005 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:34:43.959016 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:34:43.959027 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:34:43.959038 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:34:43.959048 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:34:43.959059 | orchestrator |
2026-04-08 00:34:43.959079 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-08 00:34:45.485935 | orchestrator | Wednesday 08 April 2026 00:34:43 +0000 (0:00:01.988) 0:00:52.995 *******
2026-04-08 00:34:45.486143 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:34:45.486172 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:34:45.486194 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:34:45.486215 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:34:45.486234 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:34:45.486253 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:34:45.486273 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:34:45.486291 | orchestrator |
2026-04-08 00:34:45.486310 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-08 00:34:45.486397 | orchestrator | Wednesday 08 April 2026 00:34:44 +0000 (0:00:00.733) 0:00:53.728 *******
2026-04-08 00:34:45.486417 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:34:45.486436 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:34:45.486455 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:34:45.486475 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:34:45.486495 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:34:45.486515 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:34:45.486532 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:34:45.486546 | orchestrator |
2026-04-08 00:34:45.486559 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:34:45.486573 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-08 00:34:45.486587 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 00:34:45.486601 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 00:34:45.486613 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 00:34:45.486626 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 00:34:45.486660 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 00:34:45.486679 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 00:34:45.486697 | orchestrator |
2026-04-08 00:34:45.486722 | orchestrator |
2026-04-08 00:34:45.486741 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:34:45.486760 | orchestrator | Wednesday 08 April 2026 00:34:45 +0000 (0:00:00.495) 0:00:54.224 *******
2026-04-08 00:34:45.486781 | orchestrator | ===============================================================================
2026-04-08 00:34:45.486799 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.77s
2026-04-08 00:34:45.486819 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.12s
2026-04-08 00:34:45.486838 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.34s
2026-04-08 00:34:45.486885 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.99s
2026-04-08 00:34:45.486903 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.64s
2026-04-08 00:34:45.486922 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.16s
2026-04-08 00:34:45.486940 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.99s
2026-04-08 00:34:45.486957 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.75s
2026-04-08 00:34:45.486974 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s
2026-04-08 00:34:45.486990 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s
2026-04-08 00:34:45.487009 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.61s
2026-04-08 00:34:45.487027 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.54s
2026-04-08 00:34:45.487046 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s
2026-04-08 00:34:45.487064 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s
2026-04-08 00:34:45.487083 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.21s
2026-04-08 00:34:45.487095 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.18s
2026-04-08 00:34:45.487106 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.17s
2026-04-08 00:34:45.487117 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s
2026-04-08 00:34:45.487127 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.10s
2026-04-08 00:34:45.487138 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.03s
2026-04-08 00:34:45.649676 | orchestrator | + osism apply wireguard
2026-04-08 00:34:56.871468 | orchestrator | 2026-04-08 00:34:56 | INFO  | Prepare task for execution of wireguard.
2026-04-08 00:34:56.945897 | orchestrator | 2026-04-08 00:34:56 | INFO  | Task c7ffaf5a-ff0b-420f-ba4d-39eabd90f221 (wireguard) was prepared for execution.
2026-04-08 00:34:56.945975 | orchestrator | 2026-04-08 00:34:56 | INFO  | It takes a moment until task c7ffaf5a-ff0b-420f-ba4d-39eabd90f221 (wireguard) has been started and output is visible here.
2026-04-08 00:35:15.176717 | orchestrator |
2026-04-08 00:35:15.176823 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-08 00:35:15.176834 | orchestrator |
2026-04-08 00:35:15.176841 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-08 00:35:15.176847 | orchestrator | Wednesday 08 April 2026 00:35:00 +0000 (0:00:00.209) 0:00:00.209 *******
2026-04-08 00:35:15.176855 | orchestrator | ok: [testbed-manager]
2026-04-08 00:35:15.176862 | orchestrator |
2026-04-08 00:35:15.176868 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-08 00:35:15.176875 | orchestrator | Wednesday 08 April 2026 00:35:01 +0000 (0:00:01.433) 0:00:01.642 *******
2026-04-08 00:35:15.176882 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:15.176889 | orchestrator |
2026-04-08 00:35:15.176895 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-08 00:35:15.176901 | orchestrator | Wednesday 08 April 2026 00:35:07 +0000 (0:00:06.058) 0:00:07.700 *******
2026-04-08 00:35:15.176907 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:15.176914 | orchestrator |
2026-04-08 00:35:15.176919 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-08 00:35:15.176926 | orchestrator | Wednesday 08 April 2026 00:35:08 +0000 (0:00:00.568) 0:00:08.269 *******
2026-04-08 00:35:15.176932 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:15.176939 | orchestrator |
2026-04-08 00:35:15.176944 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-08 00:35:15.176951 | orchestrator | Wednesday 08 April 2026 00:35:08 +0000 (0:00:00.450) 0:00:08.719 *******
2026-04-08 00:35:15.176956 | orchestrator | ok: [testbed-manager]
2026-04-08 00:35:15.176983 | orchestrator |
2026-04-08 00:35:15.176989 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-08 00:35:15.176996 | orchestrator | Wednesday 08 April 2026 00:35:09 +0000 (0:00:00.533) 0:00:09.253 *******
2026-04-08 00:35:15.177002 | orchestrator | ok: [testbed-manager]
2026-04-08 00:35:15.177008 | orchestrator |
2026-04-08 00:35:15.177014 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-08 00:35:15.177020 | orchestrator | Wednesday 08 April 2026 00:35:09 +0000 (0:00:00.432) 0:00:09.685 *******
2026-04-08 00:35:15.177026 | orchestrator | ok: [testbed-manager]
2026-04-08 00:35:15.177032 | orchestrator |
2026-04-08 00:35:15.177038 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-08 00:35:15.177044 | orchestrator | Wednesday 08 April 2026 00:35:09 +0000 (0:00:00.409) 0:00:10.094 *******
2026-04-08 00:35:15.177050 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:15.177056 | orchestrator |
2026-04-08 00:35:15.177062 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-08 00:35:15.177068 | orchestrator | Wednesday 08 April 2026 00:35:11 +0000 (0:00:01.178) 0:00:11.273 *******
2026-04-08 00:35:15.177074 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-08 00:35:15.177080 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:15.177086 | orchestrator |
2026-04-08 00:35:15.177091 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-08 00:35:15.177097 | orchestrator | Wednesday 08 April 2026 00:35:12 +0000 (0:00:00.960) 0:00:12.233 *******
2026-04-08 00:35:15.177120 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:15.177126 | orchestrator |
2026-04-08 00:35:15.177132 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-08 00:35:15.177138 | orchestrator | Wednesday 08 April 2026 00:35:14 +0000 (0:00:01.953) 0:00:14.187 *******
2026-04-08 00:35:15.177143 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:15.177149 | orchestrator |
2026-04-08 00:35:15.177155 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:35:15.177160 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:35:15.177168 | orchestrator |
2026-04-08 00:35:15.177174 | orchestrator |
2026-04-08 00:35:15.177179 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:35:15.177186 | orchestrator | Wednesday 08 April 2026 00:35:14 +0000 (0:00:00.934) 0:00:15.121 *******
2026-04-08 00:35:15.177191 | orchestrator | ===============================================================================
2026-04-08 00:35:15.177198 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.06s
2026-04-08 00:35:15.177204 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.95s
2026-04-08 00:35:15.177210 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.43s
2026-04-08 00:35:15.177216 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s
2026-04-08 00:35:15.177222 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2026-04-08 00:35:15.177228 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s
2026-04-08 00:35:15.177234 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2026-04-08 00:35:15.177240 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2026-04-08 00:35:15.177246 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-04-08 00:35:15.177252 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2026-04-08 00:35:15.177258 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-04-08 00:35:15.384832 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-08 00:35:15.414690 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-08 00:35:15.414804 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-08 00:35:15.492162 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 197 0 --:--:-- --:--:-- --:--:-- 200
2026-04-08 00:35:15.504638 | orchestrator | + osism apply --environment custom workarounds
2026-04-08 00:35:16.734933 | orchestrator | 2026-04-08 00:35:16 | INFO  | Trying to run play workarounds in environment custom
2026-04-08 00:35:26.770531 | orchestrator | 2026-04-08 00:35:26 | INFO  | Prepare task for execution of workarounds.
2026-04-08 00:35:26.849581 | orchestrator | 2026-04-08 00:35:26 | INFO  | Task 573978eb-74f3-4ece-9ad1-042512ee65e6 (workarounds) was prepared for execution.
2026-04-08 00:35:26.849680 | orchestrator | 2026-04-08 00:35:26 | INFO  | It takes a moment until task 573978eb-74f3-4ece-9ad1-042512ee65e6 (workarounds) has been started and output is visible here.
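The wireguard play above generates key material and templates a `wg0.conf` on the manager node. For orientation only, a wg-quick configuration of the general kind such a role produces looks roughly like the sketch below; every address, port, and key here is a placeholder, not a value from this job, and the real template is defined by the osism.services.wireguard role.

```ini
# Illustrative wg-quick configuration sketch; all values are placeholders.
[Interface]
Address = 192.168.0.1/24          ; placeholder tunnel address of the server
PrivateKey = <server-private-key> ; generated by "wg genkey" in the role
ListenPort = 51820                ; conventional WireGuard port (assumed)

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>    ; generated by "wg genpsk" in the role
AllowedIPs = 192.168.0.2/32       ; placeholder client tunnel address
```

The `prepare-wireguard-configuration.sh` step that follows then fetches the public endpoint and finalizes the client configuration.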
2026-04-08 00:35:50.935074 | orchestrator |
2026-04-08 00:35:50.935188 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:35:50.935205 | orchestrator |
2026-04-08 00:35:50.935218 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-08 00:35:50.935229 | orchestrator | Wednesday 08 April 2026 00:35:29 +0000 (0:00:00.180) 0:00:00.180 *******
2026-04-08 00:35:50.935241 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-08 00:35:50.935253 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-08 00:35:50.935371 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-08 00:35:50.935396 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-08 00:35:50.935410 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-08 00:35:50.935421 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-08 00:35:50.935432 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-08 00:35:50.935443 | orchestrator |
2026-04-08 00:35:50.935455 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-08 00:35:50.935466 | orchestrator |
2026-04-08 00:35:50.935478 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-08 00:35:50.935489 | orchestrator | Wednesday 08 April 2026 00:35:30 +0000 (0:00:00.700) 0:00:00.880 *******
2026-04-08 00:35:50.935501 | orchestrator | ok: [testbed-manager]
2026-04-08 00:35:50.935513 | orchestrator |
2026-04-08 00:35:50.935541 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-08 00:35:50.935553 | orchestrator |
2026-04-08 00:35:50.935564 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-08 00:35:50.935575 | orchestrator | Wednesday 08 April 2026 00:35:33 +0000 (0:00:02.660) 0:00:03.540 *******
2026-04-08 00:35:50.935587 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:35:50.935600 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:35:50.935613 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:35:50.935625 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:35:50.935638 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:35:50.935650 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:35:50.935663 | orchestrator |
2026-04-08 00:35:50.935676 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-08 00:35:50.935688 | orchestrator |
2026-04-08 00:35:50.935701 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-08 00:35:50.935714 | orchestrator | Wednesday 08 April 2026 00:35:35 +0000 (0:00:02.266) 0:00:05.807 *******
2026-04-08 00:35:50.935728 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:35:50.935742 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:35:50.935754 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:35:50.935792 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:35:50.935806 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:35:50.935819 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:35:50.935831 | orchestrator |
2026-04-08 00:35:50.935844 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-08 00:35:50.935857 | orchestrator | Wednesday 08 April 2026 00:35:36 +0000 (0:00:01.266) 0:00:07.074 *******
2026-04-08 00:35:50.935870 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:35:50.935883 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:35:50.935896 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:35:50.935908 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:35:50.935921 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:35:50.935934 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:35:50.935947 | orchestrator |
2026-04-08 00:35:50.935959 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-08 00:35:50.935973 | orchestrator | Wednesday 08 April 2026 00:35:40 +0000 (0:00:03.866) 0:00:10.940 *******
2026-04-08 00:35:50.936028 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:35:50.936046 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:35:50.936063 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:35:50.936080 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:35:50.936098 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:35:50.936116 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:35:50.936136 | orchestrator |
2026-04-08 00:35:50.936155 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-08 00:35:50.936172 | orchestrator |
2026-04-08 00:35:50.936192 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-08 00:35:50.936203 | orchestrator | Wednesday 08 April 2026 00:35:41 +0000 (0:00:00.477) 0:00:11.417 *******
2026-04-08 00:35:50.936214 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:50.936225 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:35:50.936236 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:35:50.936247 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:35:50.936324 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:35:50.936340 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:35:50.936351 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:35:50.936362 | orchestrator |
2026-04-08 00:35:50.936372 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-08 00:35:50.936384 | orchestrator | Wednesday 08 April 2026 00:35:42 +0000 (0:00:01.701) 0:00:13.119 *******
2026-04-08 00:35:50.936394 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:50.936405 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:35:50.936416 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:35:50.936427 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:35:50.936438 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:35:50.936448 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:35:50.936482 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:35:50.936494 | orchestrator |
2026-04-08 00:35:50.936504 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-08 00:35:50.936515 | orchestrator | Wednesday 08 April 2026 00:35:44 +0000 (0:00:01.365) 0:00:14.485 *******
2026-04-08 00:35:50.936526 | orchestrator | ok: [testbed-manager]
2026-04-08 00:35:50.936541 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:35:50.936560 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:35:50.936578 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:35:50.936596 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:35:50.936615 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:35:50.936635 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:35:50.936654 | orchestrator |
2026-04-08 00:35:50.936688 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-08 00:35:50.936700 | orchestrator | Wednesday 08 April 2026 00:35:45 +0000 (0:00:01.554) 0:00:16.040 *******
2026-04-08 00:35:50.936711 | orchestrator | changed: [testbed-manager]
2026-04-08 00:35:50.936722 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:35:50.936733 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:35:50.936743 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:35:50.936754 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:35:50.936765 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:35:50.936778 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:35:50.936852 | orchestrator |
2026-04-08 00:35:50.936873 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-08 00:35:50.936890 | orchestrator | Wednesday 08 April 2026 00:35:47 +0000 (0:00:01.551) 0:00:17.592 *******
2026-04-08 00:35:50.936901 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:35:50.936920 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:35:50.936931 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:35:50.936942 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:35:50.936953 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:35:50.936963 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:35:50.936974 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:35:50.936984 | orchestrator |
2026-04-08 00:35:50.936995 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-08 00:35:50.937006 | orchestrator |
2026-04-08 00:35:50.937017 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-08 00:35:50.937028 | orchestrator | Wednesday 08 April 2026 00:35:47 +0000 (0:00:00.618) 0:00:18.211 *******
2026-04-08 00:35:50.937039 | orchestrator | ok: [testbed-manager]
2026-04-08 00:35:50.937050 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:35:50.937060 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:35:50.937071 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:35:50.937082 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:35:50.937093 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:35:50.937103 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:35:50.937114 | orchestrator |
2026-04-08 00:35:50.937125 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:35:50.937137 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:35:50.937150 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:35:50.937161 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:35:50.937172 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:35:50.937182 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:35:50.937193 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:35:50.937204 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:35:50.937215 | orchestrator |
2026-04-08 00:35:50.937228 | orchestrator |
2026-04-08 00:35:50.937247 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:35:50.937382 | orchestrator | Wednesday 08 April 2026 00:35:50 +0000 (0:00:02.918) 0:00:21.129 *******
2026-04-08 00:35:50.937401 | orchestrator | ===============================================================================
2026-04-08 00:35:50.937424 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.87s
2026-04-08 00:35:50.937435 | orchestrator | Install python3-docker -------------------------------------------------- 2.92s
2026-04-08 00:35:50.937445 | orchestrator | Apply netplan configuration --------------------------------------------- 2.66s
2026-04-08 00:35:50.937456 | orchestrator | Apply netplan configuration --------------------------------------------- 2.27s
2026-04-08 00:35:50.937467 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2026-04-08 00:35:50.937478 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s
2026-04-08 00:35:50.937488 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.55s
2026-04-08 00:35:50.937499 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.37s
2026-04-08 00:35:50.937510 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.27s
2026-04-08 00:35:50.937520 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.70s
2026-04-08 00:35:50.937531 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2026-04-08 00:35:50.937553 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.48s
2026-04-08 00:35:51.350248 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-08 00:36:02.580051 | orchestrator | 2026-04-08 00:36:02 | INFO  | Prepare task for execution of reboot.
2026-04-08 00:36:02.659566 | orchestrator | 2026-04-08 00:36:02 | INFO  | Task 34b87784-3ea9-4829-9d3c-32674cdb4444 (reboot) was prepared for execution.
2026-04-08 00:36:02.659650 | orchestrator | 2026-04-08 00:36:02 | INFO  | It takes a moment until task 34b87784-3ea9-4829-9d3c-32674cdb4444 (reboot) has been started and output is visible here.
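Each `osism apply` run above ends in a PLAY RECAP whose per-host counters (`failed=`, `unreachable=`) are the quickest signal of success. A hypothetical way to scan such recap output mechanically, not part of the testbed tooling, is to grep for non-zero counters:

```shell
# Illustrative helper (not part of OSISM): read PLAY RECAP lines on stdin
# and return non-zero if any host reports failed=N or unreachable=N, N >= 1.
check_recap() {
  ! grep -Eq '(failed|unreachable)=[1-9]'
}
```

Piping a captured run log through `check_recap` (e.g. `check_recap < run.log`, filename illustrative) would then make a wrapper script fail fast on a bad recap.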
2026-04-08 00:36:13.443418 | orchestrator |
2026-04-08 00:36:13.443500 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:36:13.443508 | orchestrator |
2026-04-08 00:36:13.443512 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:36:13.443517 | orchestrator | Wednesday 08 April 2026 00:36:05 +0000 (0:00:00.243) 0:00:00.243 *******
2026-04-08 00:36:13.443521 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:36:13.443526 | orchestrator |
2026-04-08 00:36:13.443530 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:36:13.443535 | orchestrator | Wednesday 08 April 2026 00:36:05 +0000 (0:00:00.152) 0:00:00.395 *******
2026-04-08 00:36:13.443539 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:36:13.443543 | orchestrator |
2026-04-08 00:36:13.443558 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:36:13.443562 | orchestrator | Wednesday 08 April 2026 00:36:07 +0000 (0:00:01.251) 0:00:01.647 *******
2026-04-08 00:36:13.443566 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:36:13.443570 | orchestrator |
2026-04-08 00:36:13.443574 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:36:13.443577 | orchestrator |
2026-04-08 00:36:13.443581 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:36:13.443585 | orchestrator | Wednesday 08 April 2026 00:36:07 +0000 (0:00:00.102) 0:00:01.749 *******
2026-04-08 00:36:13.443589 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:36:13.443593 | orchestrator |
2026-04-08 00:36:13.443597 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:36:13.443601 | orchestrator | Wednesday 08 April 2026 00:36:07 +0000 (0:00:00.085) 0:00:01.835 *******
2026-04-08 00:36:13.443604 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:36:13.443608 | orchestrator |
2026-04-08 00:36:13.443612 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:36:13.443616 | orchestrator | Wednesday 08 April 2026 00:36:08 +0000 (0:00:01.000) 0:00:02.835 *******
2026-04-08 00:36:13.443620 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:36:13.443624 | orchestrator |
2026-04-08 00:36:13.443640 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:36:13.443644 | orchestrator |
2026-04-08 00:36:13.443648 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:36:13.443652 | orchestrator | Wednesday 08 April 2026 00:36:08 +0000 (0:00:00.109) 0:00:02.945 *******
2026-04-08 00:36:13.443655 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:36:13.443659 | orchestrator |
2026-04-08 00:36:13.443663 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:36:13.443667 | orchestrator | Wednesday 08 April 2026 00:36:08 +0000 (0:00:00.094) 0:00:03.040 *******
2026-04-08 00:36:13.443671 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:36:13.443674 | orchestrator |
2026-04-08 00:36:13.443678 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:36:13.443682 | orchestrator | Wednesday 08 April 2026 00:36:09 +0000 (0:00:01.046) 0:00:04.086 *******
2026-04-08 00:36:13.443686 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:36:13.443690 | orchestrator |
2026-04-08 00:36:13.443694 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:36:13.443698 | orchestrator |
2026-04-08 00:36:13.443701 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:36:13.443705 | orchestrator | Wednesday 08 April 2026 00:36:09 +0000 (0:00:00.132) 0:00:04.219 *******
2026-04-08 00:36:13.443709 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:36:13.443713 | orchestrator |
2026-04-08 00:36:13.443717 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:36:13.443720 | orchestrator | Wednesday 08 April 2026 00:36:09 +0000 (0:00:00.087) 0:00:04.307 *******
2026-04-08 00:36:13.443724 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:36:13.443728 | orchestrator |
2026-04-08 00:36:13.443732 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:36:13.443736 | orchestrator | Wednesday 08 April 2026 00:36:10 +0000 (0:00:01.001) 0:00:05.309 *******
2026-04-08 00:36:13.443739 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:36:13.443743 | orchestrator |
2026-04-08 00:36:13.443747 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:36:13.443751 | orchestrator |
2026-04-08 00:36:13.443755 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:36:13.443758 | orchestrator | Wednesday 08 April 2026 00:36:10 +0000 (0:00:00.105) 0:00:05.414 *******
2026-04-08 00:36:13.443762 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:36:13.443766 | orchestrator |
2026-04-08 00:36:13.443770 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:36:13.443774 | orchestrator | Wednesday 08 April 2026 00:36:11 +0000 (0:00:00.165) 0:00:05.580 *******
2026-04-08 00:36:13.443778 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:36:13.443781 | orchestrator |
2026-04-08 00:36:13.443785 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:36:13.443789 | orchestrator | Wednesday 08 April 2026 00:36:12 +0000 (0:00:01.030) 0:00:06.611 *******
2026-04-08 00:36:13.443793 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:36:13.443797 | orchestrator |
2026-04-08 00:36:13.443800 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:36:13.443804 | orchestrator |
2026-04-08 00:36:13.443808 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:36:13.443812 | orchestrator | Wednesday 08 April 2026 00:36:12 +0000 (0:00:00.106) 0:00:06.717 *******
2026-04-08 00:36:13.443816 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:36:13.443819 | orchestrator |
2026-04-08 00:36:13.443823 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:36:13.443827 | orchestrator | Wednesday 08 April 2026 00:36:12 +0000 (0:00:00.089) 0:00:06.807 *******
2026-04-08 00:36:13.443831 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:36:13.443835 | orchestrator |
2026-04-08 00:36:13.443838 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:36:13.443846 | orchestrator | Wednesday 08 April 2026 00:36:13 +0000 (0:00:00.990) 0:00:07.798 *******
2026-04-08 00:36:13.443861 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:36:13.443865 | orchestrator |
2026-04-08 00:36:13.443869 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:36:13.443873 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:36:13.443879 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:36:13.443885 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:36:13.443889 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:36:13.443893 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:36:13.443897 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:36:13.443901 | orchestrator |
2026-04-08 00:36:13.443904 | orchestrator |
2026-04-08 00:36:13.443908 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:36:13.443912 | orchestrator | Wednesday 08 April 2026 00:36:13 +0000 (0:00:00.035) 0:00:07.833 *******
2026-04-08 00:36:13.443916 | orchestrator | ===============================================================================
2026-04-08 00:36:13.443920 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.32s
2026-04-08 00:36:13.443924 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.68s
2026-04-08 00:36:13.443927 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s
2026-04-08 00:36:13.553461 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-08 00:36:24.779118 | orchestrator | 2026-04-08 00:36:24 | INFO  | Prepare task for execution of wait-for-connection.
2026-04-08 00:36:24.851080 | orchestrator | 2026-04-08 00:36:24 | INFO  | Task 6b0a35c3-8ac3-4b21-bf1a-927815280eba (wait-for-connection) was prepared for execution.
2026-04-08 00:36:24.851172 | orchestrator | 2026-04-08 00:36:24 | INFO  | It takes a moment until task 6b0a35c3-8ac3-4b21-bf1a-927815280eba (wait-for-connection) has been started and output is visible here.
2026-04-08 00:36:40.040602 | orchestrator | 2026-04-08 00:36:40.040714 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-08 00:36:40.040731 | orchestrator | 2026-04-08 00:36:40.040744 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-08 00:36:40.040755 | orchestrator | Wednesday 08 April 2026 00:36:28 +0000 (0:00:00.373) 0:00:00.374 ******* 2026-04-08 00:36:40.040767 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:36:40.040779 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:36:40.040790 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:36:40.040801 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:36:40.040812 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:36:40.040823 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:36:40.040835 | orchestrator | 2026-04-08 00:36:40.040846 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:36:40.040858 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:36:40.040871 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:36:40.040909 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:36:40.040921 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:36:40.040932 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:36:40.040942 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:36:40.040953 | orchestrator | 2026-04-08 00:36:40.040964 | orchestrator | 2026-04-08 00:36:40.040975 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-08 00:36:40.040986 | orchestrator | Wednesday 08 April 2026 00:36:39 +0000 (0:00:11.598) 0:00:11.972 ******* 2026-04-08 00:36:40.040998 | orchestrator | =============================================================================== 2026-04-08 00:36:40.041008 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.60s 2026-04-08 00:36:40.156779 | orchestrator | + osism apply hddtemp 2026-04-08 00:36:51.451010 | orchestrator | 2026-04-08 00:36:51 | INFO  | Prepare task for execution of hddtemp. 2026-04-08 00:36:51.521986 | orchestrator | 2026-04-08 00:36:51 | INFO  | Task e4f30336-34ac-44d6-9433-8d8bb949f25b (hddtemp) was prepared for execution. 2026-04-08 00:36:51.522226 | orchestrator | 2026-04-08 00:36:51 | INFO  | It takes a moment until task e4f30336-34ac-44d6-9433-8d8bb949f25b (hddtemp) has been started and output is visible here. 2026-04-08 00:37:17.899053 | orchestrator | 2026-04-08 00:37:17.899154 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-08 00:37:17.899166 | orchestrator | 2026-04-08 00:37:17.899197 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-08 00:37:17.899205 | orchestrator | Wednesday 08 April 2026 00:36:54 +0000 (0:00:00.286) 0:00:00.286 ******* 2026-04-08 00:37:17.899213 | orchestrator | ok: [testbed-manager] 2026-04-08 00:37:17.899222 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:37:17.899230 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:37:17.899237 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:37:17.899244 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:37:17.899252 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:37:17.899272 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:37:17.899280 | orchestrator | 2026-04-08 00:37:17.899287 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-08 00:37:17.899294 | orchestrator | Wednesday 08 April 2026 00:36:55 +0000 (0:00:00.603) 0:00:00.889 ******* 2026-04-08 00:37:17.899304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:37:17.899314 | orchestrator | 2026-04-08 00:37:17.899322 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-08 00:37:17.899329 | orchestrator | Wednesday 08 April 2026 00:36:56 +0000 (0:00:01.012) 0:00:01.902 ******* 2026-04-08 00:37:17.899337 | orchestrator | ok: [testbed-manager] 2026-04-08 00:37:17.899344 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:37:17.899352 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:37:17.899359 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:37:17.899366 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:37:17.899373 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:37:17.899380 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:37:17.899388 | orchestrator | 2026-04-08 00:37:17.899395 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-08 00:37:17.899403 | orchestrator | Wednesday 08 April 2026 00:36:58 +0000 (0:00:02.346) 0:00:04.249 ******* 2026-04-08 00:37:17.899410 | orchestrator | changed: [testbed-manager] 2026-04-08 00:37:17.899438 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:37:17.899446 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:37:17.899453 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:37:17.899461 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:37:17.899468 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:37:17.899475 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:37:17.899483 | 
orchestrator | 2026-04-08 00:37:17.899490 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-08 00:37:17.899497 | orchestrator | Wednesday 08 April 2026 00:36:59 +0000 (0:00:00.953) 0:00:05.202 ******* 2026-04-08 00:37:17.899505 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:37:17.899512 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:37:17.899519 | orchestrator | ok: [testbed-manager] 2026-04-08 00:37:17.899526 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:37:17.899534 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:37:17.899541 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:37:17.899548 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:37:17.899555 | orchestrator | 2026-04-08 00:37:17.899563 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-08 00:37:17.899570 | orchestrator | Wednesday 08 April 2026 00:37:00 +0000 (0:00:01.334) 0:00:06.536 ******* 2026-04-08 00:37:17.899577 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:37:17.899585 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:37:17.899592 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:37:17.899599 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:37:17.899606 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:37:17.899614 | orchestrator | changed: [testbed-manager] 2026-04-08 00:37:17.899621 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:37:17.899628 | orchestrator | 2026-04-08 00:37:17.899635 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-08 00:37:17.899643 | orchestrator | Wednesday 08 April 2026 00:37:01 +0000 (0:00:00.590) 0:00:07.127 ******* 2026-04-08 00:37:17.899650 | orchestrator | changed: [testbed-manager] 2026-04-08 00:37:17.899657 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:37:17.899665 | orchestrator | changed: [testbed-node-4] 
2026-04-08 00:37:17.899672 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:37:17.899679 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:37:17.899686 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:37:17.899694 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:37:17.899701 | orchestrator | 2026-04-08 00:37:17.899709 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-08 00:37:17.899716 | orchestrator | Wednesday 08 April 2026 00:37:14 +0000 (0:00:13.482) 0:00:20.609 ******* 2026-04-08 00:37:17.899724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:37:17.899731 | orchestrator | 2026-04-08 00:37:17.899738 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-08 00:37:17.899746 | orchestrator | Wednesday 08 April 2026 00:37:16 +0000 (0:00:01.121) 0:00:21.731 ******* 2026-04-08 00:37:17.899753 | orchestrator | changed: [testbed-manager] 2026-04-08 00:37:17.899760 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:37:17.899767 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:37:17.899775 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:37:17.899782 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:37:17.899789 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:37:17.899796 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:37:17.899803 | orchestrator | 2026-04-08 00:37:17.899810 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:37:17.899818 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:37:17.899842 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:37:17.899856 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:37:17.899864 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:37:17.899875 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:37:17.899882 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:37:17.899890 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:37:17.899897 | orchestrator | 2026-04-08 00:37:17.899904 | orchestrator | 2026-04-08 00:37:17.899912 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:37:17.899919 | orchestrator | Wednesday 08 April 2026 00:37:17 +0000 (0:00:01.680) 0:00:23.412 ******* 2026-04-08 00:37:17.899927 | orchestrator | =============================================================================== 2026-04-08 00:37:17.899934 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.48s 2026-04-08 00:37:17.899941 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.35s 2026-04-08 00:37:17.899948 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.68s 2026-04-08 00:37:17.899956 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.33s 2026-04-08 00:37:17.899963 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.12s 2026-04-08 00:37:17.899970 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.01s 2026-04-08 00:37:17.899978 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.95s 2026-04-08 00:37:17.899985 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.60s 2026-04-08 00:37:17.899992 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.59s 2026-04-08 00:37:18.031646 | orchestrator | ++ semver latest 7.1.1 2026-04-08 00:37:18.086376 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-08 00:37:18.086506 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-08 00:37:18.086537 | orchestrator | + sudo systemctl restart manager.service 2026-04-08 00:37:31.963573 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-08 00:37:31.963664 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-08 00:37:31.963675 | orchestrator | + local max_attempts=60 2026-04-08 00:37:31.963683 | orchestrator | + local name=ceph-ansible 2026-04-08 00:37:31.963690 | orchestrator | + local attempt_num=1 2026-04-08 00:37:31.963697 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:37:32.003408 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:37:32.003509 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:37:32.003526 | orchestrator | + sleep 5 2026-04-08 00:37:37.007050 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:37:37.036487 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:37:37.036589 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:37:37.036603 | orchestrator | + sleep 5 2026-04-08 00:37:42.039016 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:37:42.077279 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:37:42.077376 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:37:42.077392 | orchestrator | + sleep 5 2026-04-08 00:37:47.081088 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:37:47.115076 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:37:47.115202 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:37:47.115217 | orchestrator | + sleep 5 2026-04-08 00:37:52.119318 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:37:52.156682 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:37:52.156748 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:37:52.156754 | orchestrator | + sleep 5 2026-04-08 00:37:57.160973 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:37:57.196483 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:37:57.196577 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:37:57.196590 | orchestrator | + sleep 5 2026-04-08 00:38:02.200902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:02.240352 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:02.240455 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:38:02.240471 | orchestrator | + sleep 5 2026-04-08 00:38:07.244783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:07.294230 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:07.294334 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:38:07.294350 | orchestrator | + sleep 5 2026-04-08 00:38:12.297100 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:12.327596 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:12.327698 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:38:12.327714 | orchestrator | + sleep 5 2026-04-08 00:38:17.332072 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:17.367855 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:17.367972 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:38:17.367995 | orchestrator | + sleep 5 2026-04-08 00:38:22.371980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:22.407209 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:22.407314 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:38:22.407330 | orchestrator | + sleep 5 2026-04-08 00:38:27.411075 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:27.447516 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:27.447638 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:38:27.447657 | orchestrator | + sleep 5 2026-04-08 00:38:32.451851 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:32.486092 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:32.486208 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:38:32.486219 | orchestrator | + sleep 5 2026-04-08 00:38:37.489954 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:38:37.522697 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:37.522805 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-08 00:38:37.522824 | orchestrator | + local max_attempts=60 2026-04-08 00:38:37.522838 | orchestrator | + local name=kolla-ansible 2026-04-08 00:38:37.522851 | orchestrator | + local attempt_num=1 2026-04-08 00:38:37.523577 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-08 00:38:37.558579 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:37.558674 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-08 00:38:37.558690 | orchestrator | + local max_attempts=60 2026-04-08 00:38:37.558703 | orchestrator | + local name=osism-ansible 2026-04-08 00:38:37.558715 | orchestrator | + local attempt_num=1 2026-04-08 00:38:37.558726 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-08 00:38:37.589325 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:38:37.589421 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-08 00:38:37.589437 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-08 00:38:37.743673 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-08 00:38:37.902570 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-08 00:38:38.044357 | orchestrator | ARA in osism-ansible already disabled. 2026-04-08 00:38:38.192325 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-08 00:38:38.192918 | orchestrator | + osism apply gather-facts 2026-04-08 00:38:49.596306 | orchestrator | 2026-04-08 00:38:49 | INFO  | Prepare task for execution of gather-facts. 2026-04-08 00:38:49.663498 | orchestrator | 2026-04-08 00:38:49 | INFO  | Task a56b602d-0dd7-48ce-8060-025ad5bb0a78 (gather-facts) was prepared for execution. 2026-04-08 00:38:49.663605 | orchestrator | 2026-04-08 00:38:49 | INFO  | It takes a moment until task a56b602d-0dd7-48ce-8060-025ad5bb0a78 (gather-facts) has been started and output is visible here. 
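The `set -x` trace above shows the deploy script polling `docker inspect` until each container reports a `healthy` state. A hedged reconstruction of that helper, inferred purely from the trace (the actual function lives in the testbed scripts and may differ; `docker` is called unqualified here instead of `/usr/bin/docker` so it can be stubbed for testing):

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy as suggested by the trace:
# poll the container's health status every 5 seconds, giving up after
# max_attempts tries. Names and semantics are assumptions from the log.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above this took roughly 13 iterations (about 65 seconds) for `ceph-ansible` to move through `unhealthy` and `starting` to `healthy`, while `kolla-ansible` and `osism-ansible` passed on the first check.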
2026-04-08 00:39:01.475367 | orchestrator | 2026-04-08 00:39:01.475482 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-08 00:39:01.475500 | orchestrator | 2026-04-08 00:39:01.475513 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-08 00:39:01.475525 | orchestrator | Wednesday 08 April 2026 00:38:52 +0000 (0:00:00.247) 0:00:00.247 ******* 2026-04-08 00:39:01.475536 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:39:01.475549 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:39:01.475560 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:39:01.475571 | orchestrator | ok: [testbed-manager] 2026-04-08 00:39:01.475582 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:39:01.475593 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:39:01.475603 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:39:01.475614 | orchestrator | 2026-04-08 00:39:01.475626 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-08 00:39:01.475637 | orchestrator | 2026-04-08 00:39:01.475648 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-08 00:39:01.475660 | orchestrator | Wednesday 08 April 2026 00:39:00 +0000 (0:00:08.280) 0:00:08.528 ******* 2026-04-08 00:39:01.475671 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:39:01.475682 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:39:01.475693 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:39:01.475704 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:39:01.475715 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:01.475726 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:39:01.475737 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:39:01.475748 | orchestrator | 2026-04-08 00:39:01.475759 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-08 00:39:01.475790 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:39:01.475804 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:39:01.475815 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:39:01.475826 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:39:01.475837 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:39:01.475848 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:39:01.475859 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:39:01.475870 | orchestrator | 2026-04-08 00:39:01.475881 | orchestrator | 2026-04-08 00:39:01.475892 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:39:01.475904 | orchestrator | Wednesday 08 April 2026 00:39:01 +0000 (0:00:00.502) 0:00:09.031 ******* 2026-04-08 00:39:01.475917 | orchestrator | =============================================================================== 2026-04-08 00:39:01.475929 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.28s 2026-04-08 00:39:01.475942 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-04-08 00:39:01.592906 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-08 00:39:01.601685 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-08 
00:39:01.618221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-08 00:39:01.627020 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-08 00:39:01.636668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-08 00:39:01.652971 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-08 00:39:01.662582 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-08 00:39:01.672137 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-08 00:39:01.686421 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-08 00:39:01.695585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-08 00:39:01.709483 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-08 00:39:01.721044 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-08 00:39:01.732582 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-08 00:39:01.748122 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-08 00:39:01.762761 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-08 00:39:01.780200 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-08 00:39:01.797974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-08 00:39:01.814117 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-08 00:39:01.833864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-08 00:39:01.853821 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-08 00:39:01.872324 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-08 00:39:01.888783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-08 00:39:01.906719 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-08 00:39:01.925587 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-08 00:39:02.424530 | orchestrator | ok: Runtime: 0:23:34.526824 2026-04-08 00:39:02.535696 | 2026-04-08 00:39:02.535881 | TASK [Deploy services] 2026-04-08 00:39:03.070379 | orchestrator | skipping: Conditional result was False 2026-04-08 00:39:03.087675 | 2026-04-08 00:39:03.087902 | TASK [Deploy in a nutshell] 2026-04-08 00:39:03.754049 | orchestrator | + set -e 2026-04-08 00:39:03.754267 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-08 00:39:03.754292 | orchestrator | ++ export INTERACTIVE=false 2026-04-08 00:39:03.754313 | orchestrator | ++ INTERACTIVE=false 2026-04-08 00:39:03.754327 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-08 00:39:03.754340 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-08 00:39:03.754353 | 
orchestrator | + source /opt/manager-vars.sh
2026-04-08 00:39:03.754398 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-08 00:39:03.754426 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-08 00:39:03.754440 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-08 00:39:03.754456 | orchestrator | ++ CEPH_VERSION=reef
2026-04-08 00:39:03.754469 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-08 00:39:03.754488 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-08 00:39:03.754514 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-08 00:39:03.754535 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-08 00:39:03.754547 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-08 00:39:03.754561 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-08 00:39:03.754572 | orchestrator | ++ export ARA=false
2026-04-08 00:39:03.755103 | orchestrator |
2026-04-08 00:39:03.755122 | orchestrator | # PULL IMAGES
2026-04-08 00:39:03.755134 | orchestrator |
2026-04-08 00:39:03.755145 | orchestrator | ++ ARA=false
2026-04-08 00:39:03.755158 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-08 00:39:03.755169 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-08 00:39:03.755180 | orchestrator | ++ export TEMPEST=true
2026-04-08 00:39:03.755191 | orchestrator | ++ TEMPEST=true
2026-04-08 00:39:03.755203 | orchestrator | ++ export IS_ZUUL=true
2026-04-08 00:39:03.755213 | orchestrator | ++ IS_ZUUL=true
2026-04-08 00:39:03.755225 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187
2026-04-08 00:39:03.755236 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187
2026-04-08 00:39:03.755247 | orchestrator | ++ export EXTERNAL_API=false
2026-04-08 00:39:03.755258 | orchestrator | ++ EXTERNAL_API=false
2026-04-08 00:39:03.755269 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-08 00:39:03.755281 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-08 00:39:03.755292 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-08 00:39:03.755303 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-08 00:39:03.755314 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-08 00:39:03.755333 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-08 00:39:03.755344 | orchestrator | + echo
2026-04-08 00:39:03.755356 | orchestrator | + echo '# PULL IMAGES'
2026-04-08 00:39:03.755367 | orchestrator | + echo
2026-04-08 00:39:03.755382 | orchestrator | ++ semver latest 7.0.0
2026-04-08 00:39:03.797178 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-08 00:39:03.797289 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-08 00:39:03.797311 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-08 00:39:04.892970 | orchestrator | 2026-04-08 00:39:04 | INFO  | Trying to run play pull-images in environment custom
2026-04-08 00:39:15.075638 | orchestrator | 2026-04-08 00:39:15 | INFO  | Prepare task for execution of pull-images.
2026-04-08 00:39:15.143014 | orchestrator | 2026-04-08 00:39:15 | INFO  | Task bb4575cb-b665-4118-b2fc-f4e873c30792 (pull-images) was prepared for execution.
2026-04-08 00:39:15.143111 | orchestrator | 2026-04-08 00:39:15 | INFO  | Task bb4575cb-b665-4118-b2fc-f4e873c30792 is running in background. No more output. Check ARA for logs.
2026-04-08 00:39:16.439435 | orchestrator | 2026-04-08 00:39:16 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-08 00:39:26.564693 | orchestrator | 2026-04-08 00:39:26 | INFO  | Prepare task for execution of wipe-partitions.
2026-04-08 00:39:26.639246 | orchestrator | 2026-04-08 00:39:26 | INFO  | Task dec02216-6145-4585-83d2-9ff3051165fc (wipe-partitions) was prepared for execution.
2026-04-08 00:39:26.639360 | orchestrator | 2026-04-08 00:39:26 | INFO  | It takes a moment until task dec02216-6145-4585-83d2-9ff3051165fc (wipe-partitions) has been started and output is visible here.
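The trace above gates the image pull on the manager version: `semver latest 7.0.0` prints `-1` (the moving tag `latest` is not a semantic version), so the `[[ -1 -ge 0 ]]` test fails and the explicit `[[ latest == latest ]]` string match lets the pull proceed anyway. A minimal sketch of that gate, where `compare_version` is a hypothetical stand-in for the job's `semver` helper:

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace. "compare_version" is a
# made-up stand-in mimicking the observed semver helper behaviour:
# it prints -1 for the non-semver tag "latest".
MANAGER_VERSION=latest

compare_version() {
  if [[ "$1" == "latest" ]]; then echo -1; else echo 0; fi
}

result=$(compare_version "$MANAGER_VERSION" "7.0.0")
action=skip
# Pull when the version compares >= 7.0.0 OR is the moving tag "latest",
# matching the two tests [[ -1 -ge 0 ]] and [[ latest == latest ]] above.
if [[ "$result" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
  action="osism apply --no-wait -r 2 -e custom pull-images"
fi
echo "$action"
```

Falling back to a string match for `latest` keeps the gate working for both pinned releases and the rolling tag used in this periodic job.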
2026-04-08 00:39:37.786904 | orchestrator |
2026-04-08 00:39:37.787001 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-08 00:39:37.787014 | orchestrator |
2026-04-08 00:39:37.787076 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-08 00:39:37.787092 | orchestrator | Wednesday 08 April 2026 00:39:29 +0000 (0:00:00.146) 0:00:00.146 *******
2026-04-08 00:39:37.787123 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:39:37.787132 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:39:37.787140 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:39:37.787147 | orchestrator |
2026-04-08 00:39:37.787154 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-08 00:39:37.787162 | orchestrator | Wednesday 08 April 2026 00:39:30 +0000 (0:00:00.935) 0:00:01.081 *******
2026-04-08 00:39:37.787172 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:37.787180 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:39:37.787187 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:39:37.787195 | orchestrator |
2026-04-08 00:39:37.787202 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-08 00:39:37.787209 | orchestrator | Wednesday 08 April 2026 00:39:30 +0000 (0:00:00.235) 0:00:01.317 *******
2026-04-08 00:39:37.787216 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:39:37.787224 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:39:37.787231 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:39:37.787239 | orchestrator |
2026-04-08 00:39:37.787246 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-08 00:39:37.787253 | orchestrator | Wednesday 08 April 2026 00:39:31 +0000 (0:00:00.523) 0:00:01.841 *******
2026-04-08 00:39:37.787261 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:37.787268 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:39:37.787275 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:39:37.787282 | orchestrator |
2026-04-08 00:39:37.787289 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-08 00:39:37.787296 | orchestrator | Wednesday 08 April 2026 00:39:31 +0000 (0:00:00.216) 0:00:02.057 *******
2026-04-08 00:39:37.787304 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-08 00:39:37.787314 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-08 00:39:37.787321 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-08 00:39:37.787329 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-08 00:39:37.787336 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-08 00:39:37.787343 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-08 00:39:37.787350 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-08 00:39:37.787357 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-08 00:39:37.787364 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-08 00:39:37.787372 | orchestrator |
2026-04-08 00:39:37.787379 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-08 00:39:37.787387 | orchestrator | Wednesday 08 April 2026 00:39:32 +0000 (0:00:01.336) 0:00:03.394 *******
2026-04-08 00:39:37.787394 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-08 00:39:37.787401 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-08 00:39:37.787409 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-08 00:39:37.787416 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-08 00:39:37.787423 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-08 00:39:37.787430 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-08 00:39:37.787437 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-08 00:39:37.787445 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-08 00:39:37.787453 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-08 00:39:37.787461 | orchestrator |
2026-04-08 00:39:37.787475 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-08 00:39:37.787484 | orchestrator | Wednesday 08 April 2026 00:39:34 +0000 (0:00:01.315) 0:00:04.710 *******
2026-04-08 00:39:37.787492 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-08 00:39:37.787501 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-08 00:39:37.787509 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-08 00:39:37.787517 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-08 00:39:37.787532 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-08 00:39:37.787540 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-08 00:39:37.787549 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-08 00:39:37.787557 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-08 00:39:37.787566 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-08 00:39:37.787575 | orchestrator |
2026-04-08 00:39:37.787583 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-08 00:39:37.787591 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:02.134) 0:00:06.844 *******
2026-04-08 00:39:37.787598 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:39:37.787605 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:39:37.787612 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:39:37.787619 | orchestrator |
2026-04-08 00:39:37.787626 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-08 00:39:37.787634 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:00.599) 0:00:07.444 *******
2026-04-08 00:39:37.787641 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:39:37.787648 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:39:37.787655 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:39:37.787663 | orchestrator |
2026-04-08 00:39:37.787670 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:39:37.787679 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:39:37.787687 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:39:37.787709 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:39:37.787718 | orchestrator |
2026-04-08 00:39:37.787725 | orchestrator |
2026-04-08 00:39:37.787732 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:39:37.787740 | orchestrator | Wednesday 08 April 2026 00:39:37 +0000 (0:00:00.779) 0:00:08.224 *******
2026-04-08 00:39:37.787747 | orchestrator | ===============================================================================
2026-04-08 00:39:37.787754 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2026-04-08 00:39:37.787761 | orchestrator | Check device availability ----------------------------------------------- 1.34s
2026-04-08 00:39:37.787768 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s
2026-04-08 00:39:37.787776 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.94s
2026-04-08 00:39:37.787783 | orchestrator | Request device events from the kernel ----------------------------------- 0.78s
2026-04-08 00:39:37.787790 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2026-04-08 00:39:37.787797 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.52s
2026-04-08 00:39:37.787804 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2026-04-08 00:39:37.787811 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-04-08 00:39:49.326187 | orchestrator | 2026-04-08 00:39:49 | INFO  | Prepare task for execution of facts.
2026-04-08 00:39:49.399672 | orchestrator | 2026-04-08 00:39:49 | INFO  | Task 241d8c01-85a7-44c5-bf52-b3e34d513697 (facts) was prepared for execution.
2026-04-08 00:39:49.399766 | orchestrator | 2026-04-08 00:39:49 | INFO  | It takes a moment until task 241d8c01-85a7-44c5-bf52-b3e34d513697 (facts) has been started and output is visible here.
2026-04-08 00:40:00.067137 | orchestrator |
2026-04-08 00:40:00.067243 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-08 00:40:00.067259 | orchestrator |
2026-04-08 00:40:00.067300 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-08 00:40:00.067312 | orchestrator | Wednesday 08 April 2026 00:39:52 +0000 (0:00:00.248) 0:00:00.248 *******
2026-04-08 00:40:00.067324 | orchestrator | ok: [testbed-manager]
2026-04-08 00:40:00.067336 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:40:00.067347 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:40:00.067358 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:40:00.067369 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:40:00.067379 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:00.067390 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:00.067401 | orchestrator |
2026-04-08 00:40:00.067412 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-08 00:40:00.067423 | orchestrator | Wednesday 08 April 2026 00:39:53 +0000 (0:00:01.169) 0:00:01.417 *******
2026-04-08 00:40:00.067434 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:40:00.067445 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:40:00.067456 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:40:00.067467 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:40:00.067477 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:00.067488 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:00.067499 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:00.067510 | orchestrator |
2026-04-08 00:40:00.067521 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-08 00:40:00.067549 | orchestrator |
2026-04-08 00:40:00.067561 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-08 00:40:00.067573 | orchestrator | Wednesday 08 April 2026 00:39:54 +0000 (0:00:01.041) 0:00:02.459 *******
2026-04-08 00:40:00.067587 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:40:00.067600 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:40:00.067613 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:40:00.067626 | orchestrator | ok: [testbed-manager]
2026-04-08 00:40:00.067638 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:40:00.067651 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:00.067664 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:00.067675 | orchestrator |
2026-04-08 00:40:00.067686 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-08 00:40:00.067697 | orchestrator |
2026-04-08 00:40:00.067708 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-08 00:40:00.067719 | orchestrator | Wednesday 08 April 2026 00:39:59 +0000 (0:00:04.670) 0:00:07.129 *******
2026-04-08 00:40:00.067730 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:40:00.067741 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:40:00.067752 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:40:00.067763 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:40:00.067774 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:00.067785 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:00.067796 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:00.067807 | orchestrator |
2026-04-08 00:40:00.067818 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:40:00.067830 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:40:00.067843 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:40:00.067854 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:40:00.067865 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:40:00.067876 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:40:00.067895 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:40:00.067906 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:40:00.067917 | orchestrator |
2026-04-08 00:40:00.067928 | orchestrator |
2026-04-08 00:40:00.067939 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:40:00.067950 | orchestrator | Wednesday 08 April 2026 00:39:59 +0000 (0:00:00.439) 0:00:07.569 *******
2026-04-08 00:40:00.067961 | orchestrator | ===============================================================================
2026-04-08 00:40:00.067972 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.67s
2026-04-08 00:40:00.067983 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-04-08 00:40:00.067994 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s
2026-04-08 00:40:00.068055 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s
2026-04-08 00:40:01.326682 | orchestrator | 2026-04-08 00:40:01 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-08 00:40:01.388453 | orchestrator | 2026-04-08 00:40:01 | INFO  | Task e8270eaa-cd6d-4ec7-b31a-610bca25fab5 (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-08 00:40:01.388561 | orchestrator | 2026-04-08 00:40:01 | INFO  | It takes a moment until task e8270eaa-cd6d-4ec7-b31a-610bca25fab5 (ceph-configure-lvm-volumes) has been started and output is visible here.
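The wipe-partitions play above runs four steps per OSD disk: `wipefs`, zeroing the first 32M with `dd`, a udev rules reload, and a kernel device-event trigger. The zeroing step can be sketched safely against a scratch file; the device path below is a placeholder, and the root-only steps are left as comments since the real commands are destructive:

```shell
#!/usr/bin/env bash
# Safe sketch of the "Overwrite first 32M with zeros" task above.
# A temp file stands in for /dev/sdX so nothing real is destroyed.
set -euo pipefail

disk=$(mktemp)                                      # stand-in for /dev/sdb
dd if=/dev/urandom of="$disk" bs=1M count=33 status=none

# On a real node the play first removes filesystem signatures:
#   wipefs --all /dev/sdb
dd if=/dev/zero of="$disk" bs=1M count=32 conv=notrunc status=none
# ...and then refreshes udev so the kernel re-reads the device:
#   udevadm control --reload
#   udevadm trigger

# Verify: the first 32M are all zero while the 33rd MiB is untouched.
nonzero=$(head -c $((32 * 1024 * 1024)) "$disk" | tr -d '\0' | wc -c)
echo "non-zero bytes in first 32M: $nonzero"
rm -f "$disk"
```

Note `conv=notrunc`: it keeps `dd` from truncating the target, which matches the play's intent of blanking only the leading region (partition table, LVM and Ceph signatures) rather than the whole disk.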
2026-04-08 00:40:11.770398 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-08 00:40:11.770495 | orchestrator | 2.16.14
2026-04-08 00:40:11.770510 | orchestrator |
2026-04-08 00:40:11.770524 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-08 00:40:11.770537 | orchestrator |
2026-04-08 00:40:11.770549 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-08 00:40:11.770563 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.261) 0:00:00.261 *******
2026-04-08 00:40:11.770578 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 00:40:11.770590 | orchestrator |
2026-04-08 00:40:11.770602 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-08 00:40:11.770613 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.201) 0:00:00.462 *******
2026-04-08 00:40:11.770626 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:40:11.770638 | orchestrator |
2026-04-08 00:40:11.770649 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.770735 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.195) 0:00:00.658 *******
2026-04-08 00:40:11.770761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:40:11.770774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:40:11.770786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:40:11.770799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:40:11.770812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:40:11.770826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:40:11.770839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:40:11.770852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:40:11.770865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-08 00:40:11.770878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:40:11.770913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:40:11.770927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:40:11.770941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:40:11.770953 | orchestrator |
2026-04-08 00:40:11.770966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.770979 | orchestrator | Wednesday 08 April 2026 00:40:06 +0000 (0:00:00.321) 0:00:00.979 *******
2026-04-08 00:40:11.771019 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771033 | orchestrator |
2026-04-08 00:40:11.771048 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771061 | orchestrator | Wednesday 08 April 2026 00:40:06 +0000 (0:00:00.373) 0:00:01.353 *******
2026-04-08 00:40:11.771123 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771139 | orchestrator |
2026-04-08 00:40:11.771154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771170 | orchestrator | Wednesday 08 April 2026 00:40:06 +0000 (0:00:00.165) 0:00:01.518 *******
2026-04-08 00:40:11.771185 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771198 | orchestrator |
2026-04-08 00:40:11.771211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771224 | orchestrator | Wednesday 08 April 2026 00:40:06 +0000 (0:00:00.174) 0:00:01.692 *******
2026-04-08 00:40:11.771238 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771251 | orchestrator |
2026-04-08 00:40:11.771264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771277 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.159) 0:00:01.852 *******
2026-04-08 00:40:11.771290 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771304 | orchestrator |
2026-04-08 00:40:11.771317 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771331 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.222) 0:00:02.075 *******
2026-04-08 00:40:11.771343 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771355 | orchestrator |
2026-04-08 00:40:11.771369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771382 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.184) 0:00:02.260 *******
2026-04-08 00:40:11.771395 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771409 | orchestrator |
2026-04-08 00:40:11.771422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771434 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.187) 0:00:02.447 *******
2026-04-08 00:40:11.771448 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.771460 | orchestrator |
2026-04-08 00:40:11.771473 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771487 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.206) 0:00:02.653 *******
2026-04-08 00:40:11.771501 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d)
2026-04-08 00:40:11.771515 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d)
2026-04-08 00:40:11.771528 | orchestrator |
2026-04-08 00:40:11.771542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771578 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.358) 0:00:03.012 *******
2026-04-08 00:40:11.771592 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232)
2026-04-08 00:40:11.771606 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232)
2026-04-08 00:40:11.771619 | orchestrator |
2026-04-08 00:40:11.771640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771665 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.419) 0:00:03.431 *******
2026-04-08 00:40:11.771678 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76)
2026-04-08 00:40:11.771691 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76)
2026-04-08 00:40:11.771705 | orchestrator |
2026-04-08 00:40:11.771719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771732 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.501) 0:00:03.932 *******
2026-04-08 00:40:11.771745 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117)
2026-04-08 00:40:11.771758 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117)
2026-04-08 00:40:11.771771 | orchestrator |
2026-04-08 00:40:11.771784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:11.771797 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.543) 0:00:04.476 *******
2026-04-08 00:40:11.771811 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-08 00:40:11.771825 | orchestrator |
2026-04-08 00:40:11.771838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.771851 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.568) 0:00:05.045 *******
2026-04-08 00:40:11.771863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:40:11.771876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:40:11.771889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:40:11.771901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:40:11.771914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:40:11.771927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:40:11.771940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:40:11.771953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:40:11.771967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-08 00:40:11.771980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:40:11.772013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:40:11.772027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:40:11.772039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:40:11.772052 | orchestrator |
2026-04-08 00:40:11.772066 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.772080 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.339) 0:00:05.384 *******
2026-04-08 00:40:11.772094 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.772107 | orchestrator |
2026-04-08 00:40:11.772120 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.772133 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.178) 0:00:05.562 *******
2026-04-08 00:40:11.772146 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.772159 | orchestrator |
2026-04-08 00:40:11.772172 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.772186 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.173) 0:00:05.736 *******
2026-04-08 00:40:11.772199 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.772220 | orchestrator |
2026-04-08 00:40:11.772234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.772247 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.186) 0:00:05.922 *******
2026-04-08 00:40:11.772260 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.772273 | orchestrator |
2026-04-08 00:40:11.772287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.772323 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.170) 0:00:06.093 *******
2026-04-08 00:40:11.772335 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.772349 | orchestrator |
2026-04-08 00:40:11.772361 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.772374 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.165) 0:00:06.259 *******
2026-04-08 00:40:11.772388 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.772401 | orchestrator |
2026-04-08 00:40:11.772414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:11.772427 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.174) 0:00:06.433 *******
2026-04-08 00:40:11.772440 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:11.772454 | orchestrator |
2026-04-08 00:40:11.772475 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:18.303886 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.172) 0:00:06.606 *******
2026-04-08 00:40:18.304046 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304064 | orchestrator |
2026-04-08 00:40:18.304076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:18.304086 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.180) 0:00:06.786 *******
2026-04-08 00:40:18.304096 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-08 00:40:18.304107 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-08 00:40:18.304117 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-08 00:40:18.304127 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-08 00:40:18.304137 | orchestrator |
2026-04-08 00:40:18.304147 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:18.304176 | orchestrator | Wednesday 08 April 2026 00:40:12 +0000 (0:00:00.793) 0:00:07.580 *******
2026-04-08 00:40:18.304186 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304196 | orchestrator |
2026-04-08 00:40:18.304206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:18.304215 | orchestrator | Wednesday 08 April 2026 00:40:12 +0000 (0:00:00.176) 0:00:07.757 *******
2026-04-08 00:40:18.304225 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304234 | orchestrator |
2026-04-08 00:40:18.304244 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:18.304254 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.174) 0:00:07.931 *******
2026-04-08 00:40:18.304263 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304273 | orchestrator |
2026-04-08 00:40:18.304283 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:18.304292 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.181) 0:00:08.113 *******
2026-04-08 00:40:18.304302 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304311 | orchestrator |
2026-04-08 00:40:18.304321 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-08 00:40:18.304331 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.177) 0:00:08.291 *******
2026-04-08 00:40:18.304340 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-08 00:40:18.304350 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-08 00:40:18.304359 | orchestrator |
2026-04-08 00:40:18.304369 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-08 00:40:18.304378 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.144) 0:00:08.435 *******
2026-04-08 00:40:18.304410 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304421 | orchestrator |
2026-04-08 00:40:18.304433 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-08 00:40:18.304445 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.102) 0:00:08.537 *******
2026-04-08 00:40:18.304456 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304467 | orchestrator |
2026-04-08 00:40:18.304478 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-08 00:40:18.304490 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.115) 0:00:08.652 *******
2026-04-08 00:40:18.304502 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304519 | orchestrator |
2026-04-08 00:40:18.304536 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-08 00:40:18.304551 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.118) 0:00:08.771 *******
2026-04-08 00:40:18.304568 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:40:18.304584 | orchestrator |
2026-04-08 00:40:18.304599 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-08 00:40:18.304616 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.105) 0:00:08.877 *******
2026-04-08 00:40:18.304635 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf49c8a6-5f7f-52ec-8321-922f51127285'}})
2026-04-08 00:40:18.304653 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42db71c5-e51d-540c-8fbe-0cd4e432c3d3'}})
2026-04-08 00:40:18.304664 | orchestrator |
2026-04-08 00:40:18.304674 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-08 00:40:18.304683 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.147) 0:00:09.025 *******
2026-04-08 00:40:18.304694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf49c8a6-5f7f-52ec-8321-922f51127285'}})
2026-04-08 00:40:18.304711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42db71c5-e51d-540c-8fbe-0cd4e432c3d3'}})
2026-04-08 00:40:18.304728 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304745 | orchestrator |
2026-04-08 00:40:18.304761 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-08 00:40:18.304777 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.151) 0:00:09.176 *******
2026-04-08 00:40:18.304794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf49c8a6-5f7f-52ec-8321-922f51127285'}})
2026-04-08 00:40:18.304811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42db71c5-e51d-540c-8fbe-0cd4e432c3d3'}})
2026-04-08 00:40:18.304828 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304842 | orchestrator |
2026-04-08 00:40:18.304852 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-08 00:40:18.304861 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.277) 0:00:09.454 *******
2026-04-08 00:40:18.304871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf49c8a6-5f7f-52ec-8321-922f51127285'}})
2026-04-08 00:40:18.304899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42db71c5-e51d-540c-8fbe-0cd4e432c3d3'}})
2026-04-08 00:40:18.304910 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:40:18.304920 |
orchestrator | 2026-04-08 00:40:18.304929 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-08 00:40:18.304939 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.135) 0:00:09.590 ******* 2026-04-08 00:40:18.304948 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:40:18.304958 | orchestrator | 2026-04-08 00:40:18.304967 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-08 00:40:18.305005 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.126) 0:00:09.716 ******* 2026-04-08 00:40:18.305016 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:40:18.305035 | orchestrator | 2026-04-08 00:40:18.305045 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-08 00:40:18.305055 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.138) 0:00:09.855 ******* 2026-04-08 00:40:18.305064 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:18.305075 | orchestrator | 2026-04-08 00:40:18.305084 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-08 00:40:18.305094 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.114) 0:00:09.969 ******* 2026-04-08 00:40:18.305103 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:18.305113 | orchestrator | 2026-04-08 00:40:18.305123 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-08 00:40:18.305132 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.119) 0:00:10.089 ******* 2026-04-08 00:40:18.305142 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:18.305152 | orchestrator | 2026-04-08 00:40:18.305168 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-08 00:40:18.305184 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 
(0:00:00.120) 0:00:10.210 ******* 2026-04-08 00:40:18.305201 | orchestrator | ok: [testbed-node-3] => { 2026-04-08 00:40:18.305218 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:40:18.305234 | orchestrator |  "sdb": { 2026-04-08 00:40:18.305248 | orchestrator |  "osd_lvm_uuid": "bf49c8a6-5f7f-52ec-8321-922f51127285" 2026-04-08 00:40:18.305258 | orchestrator |  }, 2026-04-08 00:40:18.305267 | orchestrator |  "sdc": { 2026-04-08 00:40:18.305277 | orchestrator |  "osd_lvm_uuid": "42db71c5-e51d-540c-8fbe-0cd4e432c3d3" 2026-04-08 00:40:18.305287 | orchestrator |  } 2026-04-08 00:40:18.305296 | orchestrator |  } 2026-04-08 00:40:18.305306 | orchestrator | } 2026-04-08 00:40:18.305316 | orchestrator | 2026-04-08 00:40:18.305326 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-08 00:40:18.305335 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.128) 0:00:10.338 ******* 2026-04-08 00:40:18.305345 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:18.305355 | orchestrator | 2026-04-08 00:40:18.305364 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-08 00:40:18.305374 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.104) 0:00:10.442 ******* 2026-04-08 00:40:18.305384 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:18.305393 | orchestrator | 2026-04-08 00:40:18.305403 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-08 00:40:18.305413 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.114) 0:00:10.557 ******* 2026-04-08 00:40:18.305422 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:18.305432 | orchestrator | 2026-04-08 00:40:18.305441 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-08 00:40:18.305451 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 
(0:00:00.113) 0:00:10.671 ******* 2026-04-08 00:40:18.305460 | orchestrator | changed: [testbed-node-3] => { 2026-04-08 00:40:18.305470 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-08 00:40:18.305480 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:40:18.305490 | orchestrator |  "sdb": { 2026-04-08 00:40:18.305499 | orchestrator |  "osd_lvm_uuid": "bf49c8a6-5f7f-52ec-8321-922f51127285" 2026-04-08 00:40:18.305509 | orchestrator |  }, 2026-04-08 00:40:18.305519 | orchestrator |  "sdc": { 2026-04-08 00:40:18.305528 | orchestrator |  "osd_lvm_uuid": "42db71c5-e51d-540c-8fbe-0cd4e432c3d3" 2026-04-08 00:40:18.305538 | orchestrator |  } 2026-04-08 00:40:18.305548 | orchestrator |  }, 2026-04-08 00:40:18.305557 | orchestrator |  "lvm_volumes": [ 2026-04-08 00:40:18.305567 | orchestrator |  { 2026-04-08 00:40:18.305577 | orchestrator |  "data": "osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285", 2026-04-08 00:40:18.305586 | orchestrator |  "data_vg": "ceph-bf49c8a6-5f7f-52ec-8321-922f51127285" 2026-04-08 00:40:18.305603 | orchestrator |  }, 2026-04-08 00:40:18.305613 | orchestrator |  { 2026-04-08 00:40:18.305622 | orchestrator |  "data": "osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3", 2026-04-08 00:40:18.305632 | orchestrator |  "data_vg": "ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3" 2026-04-08 00:40:18.305641 | orchestrator |  } 2026-04-08 00:40:18.305651 | orchestrator |  ] 2026-04-08 00:40:18.305661 | orchestrator |  } 2026-04-08 00:40:18.305670 | orchestrator | } 2026-04-08 00:40:18.305680 | orchestrator | 2026-04-08 00:40:18.305690 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-08 00:40:18.305699 | orchestrator | Wednesday 08 April 2026 00:40:16 +0000 (0:00:00.174) 0:00:10.846 ******* 2026-04-08 00:40:18.305709 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-08 00:40:18.305718 | orchestrator | 2026-04-08 00:40:18.305728 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-08 00:40:18.305738 | orchestrator | 2026-04-08 00:40:18.305747 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-08 00:40:18.305757 | orchestrator | Wednesday 08 April 2026 00:40:17 +0000 (0:00:01.854) 0:00:12.700 ******* 2026-04-08 00:40:18.305769 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-08 00:40:18.305786 | orchestrator | 2026-04-08 00:40:18.305802 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-08 00:40:18.305819 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.219) 0:00:12.919 ******* 2026-04-08 00:40:18.305837 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:40:18.305852 | orchestrator | 2026-04-08 00:40:18.305875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.185627 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.220) 0:00:13.140 ******* 2026-04-08 00:40:25.185736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:40:25.185753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:40:25.185765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:40:25.185777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:40:25.185788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:40:25.185799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:40:25.185810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:40:25.185825 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:40:25.185837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-08 00:40:25.185849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:40:25.185860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-08 00:40:25.185871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:40:25.185901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:40:25.185913 | orchestrator | 2026-04-08 00:40:25.185925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.185936 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.323) 0:00:13.463 ******* 2026-04-08 00:40:25.185947 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.185958 | orchestrator | 2026-04-08 00:40:25.186047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186062 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.182) 0:00:13.646 ******* 2026-04-08 00:40:25.186097 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186108 | orchestrator | 2026-04-08 00:40:25.186120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186130 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.176) 0:00:13.822 ******* 2026-04-08 00:40:25.186141 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186152 | orchestrator | 2026-04-08 00:40:25.186163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186176 | 
orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.167) 0:00:13.989 ******* 2026-04-08 00:40:25.186188 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186200 | orchestrator | 2026-04-08 00:40:25.186212 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186225 | orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.171) 0:00:14.161 ******* 2026-04-08 00:40:25.186238 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186250 | orchestrator | 2026-04-08 00:40:25.186262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186274 | orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.419) 0:00:14.580 ******* 2026-04-08 00:40:25.186287 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186299 | orchestrator | 2026-04-08 00:40:25.186311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186323 | orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.171) 0:00:14.752 ******* 2026-04-08 00:40:25.186336 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186348 | orchestrator | 2026-04-08 00:40:25.186360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186373 | orchestrator | Wednesday 08 April 2026 00:40:20 +0000 (0:00:00.179) 0:00:14.932 ******* 2026-04-08 00:40:25.186385 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186398 | orchestrator | 2026-04-08 00:40:25.186410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186422 | orchestrator | Wednesday 08 April 2026 00:40:20 +0000 (0:00:00.185) 0:00:15.117 ******* 2026-04-08 00:40:25.186435 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda) 2026-04-08 00:40:25.186449 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda) 2026-04-08 00:40:25.186461 | orchestrator | 2026-04-08 00:40:25.186474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186487 | orchestrator | Wednesday 08 April 2026 00:40:20 +0000 (0:00:00.357) 0:00:15.474 ******* 2026-04-08 00:40:25.186499 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707) 2026-04-08 00:40:25.186512 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707) 2026-04-08 00:40:25.186525 | orchestrator | 2026-04-08 00:40:25.186535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186546 | orchestrator | Wednesday 08 April 2026 00:40:21 +0000 (0:00:00.375) 0:00:15.849 ******* 2026-04-08 00:40:25.186557 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466) 2026-04-08 00:40:25.186568 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466) 2026-04-08 00:40:25.186579 | orchestrator | 2026-04-08 00:40:25.186590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:25.186619 | orchestrator | Wednesday 08 April 2026 00:40:21 +0000 (0:00:00.371) 0:00:16.221 ******* 2026-04-08 00:40:25.186630 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047) 2026-04-08 00:40:25.186641 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047) 2026-04-08 00:40:25.186652 | orchestrator | 2026-04-08 00:40:25.186671 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-08 00:40:25.186682 | orchestrator | Wednesday 08 April 2026 00:40:21 +0000 (0:00:00.379) 0:00:16.600 ******* 2026-04-08 00:40:25.186693 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:40:25.186703 | orchestrator | 2026-04-08 00:40:25.186714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.186725 | orchestrator | Wednesday 08 April 2026 00:40:22 +0000 (0:00:00.364) 0:00:16.964 ******* 2026-04-08 00:40:25.186735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:40:25.186746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:40:25.186763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:40:25.186774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:40:25.186785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:40:25.186795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:40:25.186806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:40:25.186816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:40:25.186827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-08 00:40:25.186837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:40:25.186848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-08 00:40:25.186859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:40:25.186869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:40:25.186880 | orchestrator | 2026-04-08 00:40:25.186891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.186901 | orchestrator | Wednesday 08 April 2026 00:40:22 +0000 (0:00:00.367) 0:00:17.332 ******* 2026-04-08 00:40:25.186912 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186923 | orchestrator | 2026-04-08 00:40:25.186934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.186944 | orchestrator | Wednesday 08 April 2026 00:40:22 +0000 (0:00:00.186) 0:00:17.518 ******* 2026-04-08 00:40:25.186955 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.186966 | orchestrator | 2026-04-08 00:40:25.187002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187013 | orchestrator | Wednesday 08 April 2026 00:40:23 +0000 (0:00:00.576) 0:00:18.095 ******* 2026-04-08 00:40:25.187023 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.187034 | orchestrator | 2026-04-08 00:40:25.187045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187055 | orchestrator | Wednesday 08 April 2026 00:40:23 +0000 (0:00:00.193) 0:00:18.288 ******* 2026-04-08 00:40:25.187066 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.187077 | orchestrator | 2026-04-08 00:40:25.187087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187098 | orchestrator | Wednesday 08 April 2026 00:40:23 +0000 (0:00:00.183) 0:00:18.472 ******* 2026-04-08 00:40:25.187109 
| orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.187119 | orchestrator | 2026-04-08 00:40:25.187130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187141 | orchestrator | Wednesday 08 April 2026 00:40:23 +0000 (0:00:00.248) 0:00:18.720 ******* 2026-04-08 00:40:25.187151 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.187169 | orchestrator | 2026-04-08 00:40:25.187180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187190 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.178) 0:00:18.898 ******* 2026-04-08 00:40:25.187201 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.187212 | orchestrator | 2026-04-08 00:40:25.187222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187233 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.198) 0:00:19.097 ******* 2026-04-08 00:40:25.187244 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:25.187255 | orchestrator | 2026-04-08 00:40:25.187265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187276 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.184) 0:00:19.281 ******* 2026-04-08 00:40:25.187287 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-08 00:40:25.187298 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-08 00:40:25.187308 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-08 00:40:25.187319 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-08 00:40:25.187330 | orchestrator | 2026-04-08 00:40:25.187341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:25.187351 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.628) 
0:00:19.910 ******* 2026-04-08 00:40:25.187362 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599473 | orchestrator | 2026-04-08 00:40:30.599556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:30.599567 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.188) 0:00:20.099 ******* 2026-04-08 00:40:30.599575 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599580 | orchestrator | 2026-04-08 00:40:30.599585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:30.599589 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.185) 0:00:20.284 ******* 2026-04-08 00:40:30.599593 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599597 | orchestrator | 2026-04-08 00:40:30.599601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:30.599605 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.186) 0:00:20.470 ******* 2026-04-08 00:40:30.599609 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599613 | orchestrator | 2026-04-08 00:40:30.599617 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-08 00:40:30.599620 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.195) 0:00:20.666 ******* 2026-04-08 00:40:30.599624 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-08 00:40:30.599628 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-08 00:40:30.599632 | orchestrator | 2026-04-08 00:40:30.599636 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-08 00:40:30.599654 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.360) 0:00:21.026 ******* 2026-04-08 00:40:30.599658 | orchestrator | skipping: 
[testbed-node-4] 2026-04-08 00:40:30.599662 | orchestrator | 2026-04-08 00:40:30.599666 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-08 00:40:30.599669 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.134) 0:00:21.160 ******* 2026-04-08 00:40:30.599673 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599677 | orchestrator | 2026-04-08 00:40:30.599681 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-08 00:40:30.599687 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.170) 0:00:21.331 ******* 2026-04-08 00:40:30.599691 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599694 | orchestrator | 2026-04-08 00:40:30.599698 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-08 00:40:30.599702 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.136) 0:00:21.467 ******* 2026-04-08 00:40:30.599721 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:40:30.599726 | orchestrator | 2026-04-08 00:40:30.599730 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-08 00:40:30.599733 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.142) 0:00:21.610 ******* 2026-04-08 00:40:30.599738 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31d7fbda-737c-5413-835b-7dea8c782162'}}) 2026-04-08 00:40:30.599742 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d74f3d8-bff6-5917-9df4-f8420d533035'}}) 2026-04-08 00:40:30.599746 | orchestrator | 2026-04-08 00:40:30.599750 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-08 00:40:30.599754 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.155) 0:00:21.766 ******* 2026-04-08 00:40:30.599759 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31d7fbda-737c-5413-835b-7dea8c782162'}})  2026-04-08 00:40:30.599767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d74f3d8-bff6-5917-9df4-f8420d533035'}})  2026-04-08 00:40:30.599773 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599779 | orchestrator | 2026-04-08 00:40:30.599785 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-08 00:40:30.599791 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.131) 0:00:21.898 ******* 2026-04-08 00:40:30.599798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31d7fbda-737c-5413-835b-7dea8c782162'}})  2026-04-08 00:40:30.599804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d74f3d8-bff6-5917-9df4-f8420d533035'}})  2026-04-08 00:40:30.599810 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599814 | orchestrator | 2026-04-08 00:40:30.599818 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-08 00:40:30.599822 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.135) 0:00:22.033 ******* 2026-04-08 00:40:30.599826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31d7fbda-737c-5413-835b-7dea8c782162'}})  2026-04-08 00:40:30.599830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d74f3d8-bff6-5917-9df4-f8420d533035'}})  2026-04-08 00:40:30.599833 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599837 | orchestrator | 2026-04-08 00:40:30.599844 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-08 00:40:30.599850 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 
(0:00:00.134) 0:00:22.168 ******* 2026-04-08 00:40:30.599856 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:40:30.599862 | orchestrator | 2026-04-08 00:40:30.599868 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-08 00:40:30.599873 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.116) 0:00:22.284 ******* 2026-04-08 00:40:30.599879 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:40:30.599885 | orchestrator | 2026-04-08 00:40:30.599891 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-08 00:40:30.599897 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.118) 0:00:22.402 ******* 2026-04-08 00:40:30.599916 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599923 | orchestrator | 2026-04-08 00:40:30.599929 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-08 00:40:30.599935 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.124) 0:00:22.526 ******* 2026-04-08 00:40:30.599941 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599948 | orchestrator | 2026-04-08 00:40:30.599954 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-08 00:40:30.599960 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.263) 0:00:22.790 ******* 2026-04-08 00:40:30.599984 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:30.599996 | orchestrator | 2026-04-08 00:40:30.600003 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-08 00:40:30.600008 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.130) 0:00:22.920 ******* 2026-04-08 00:40:30.600015 | orchestrator | ok: [testbed-node-4] => { 2026-04-08 00:40:30.600021 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:40:30.600027 | orchestrator |  "sdb": { 
2026-04-08 00:40:30.600034 | orchestrator |             "osd_lvm_uuid": "31d7fbda-737c-5413-835b-7dea8c782162"
2026-04-08 00:40:30.600040 | orchestrator |         },
2026-04-08 00:40:30.600048 | orchestrator |         "sdc": {
2026-04-08 00:40:30.600054 | orchestrator |             "osd_lvm_uuid": "6d74f3d8-bff6-5917-9df4-f8420d533035"
2026-04-08 00:40:30.600060 | orchestrator |         }
2026-04-08 00:40:30.600067 | orchestrator |     }
2026-04-08 00:40:30.600073 | orchestrator | }
2026-04-08 00:40:30.600080 | orchestrator |
2026-04-08 00:40:30.600086 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-08 00:40:30.600092 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.109) 0:00:23.029 *******
2026-04-08 00:40:30.600098 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:30.600105 | orchestrator |
2026-04-08 00:40:30.600111 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-08 00:40:30.600118 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.103) 0:00:23.132 *******
2026-04-08 00:40:30.600124 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:30.600130 | orchestrator |
2026-04-08 00:40:30.600137 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-08 00:40:30.600143 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.104) 0:00:23.236 *******
2026-04-08 00:40:30.600149 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:30.600156 | orchestrator |
2026-04-08 00:40:30.600162 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-08 00:40:30.600174 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.104) 0:00:23.341 *******
2026-04-08 00:40:30.600180 | orchestrator | changed: [testbed-node-4] => {
2026-04-08 00:40:30.600187 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-08 00:40:30.600193 | orchestrator |         "ceph_osd_devices": {
2026-04-08 00:40:30.600200 | orchestrator |             "sdb": {
2026-04-08 00:40:30.600206 | orchestrator |                 "osd_lvm_uuid": "31d7fbda-737c-5413-835b-7dea8c782162"
2026-04-08 00:40:30.600212 | orchestrator |             },
2026-04-08 00:40:30.600219 | orchestrator |             "sdc": {
2026-04-08 00:40:30.600226 | orchestrator |                 "osd_lvm_uuid": "6d74f3d8-bff6-5917-9df4-f8420d533035"
2026-04-08 00:40:30.600232 | orchestrator |             }
2026-04-08 00:40:30.600238 | orchestrator |         },
2026-04-08 00:40:30.600245 | orchestrator |         "lvm_volumes": [
2026-04-08 00:40:30.600251 | orchestrator |             {
2026-04-08 00:40:30.600258 | orchestrator |                 "data": "osd-block-31d7fbda-737c-5413-835b-7dea8c782162",
2026-04-08 00:40:30.600265 | orchestrator |                 "data_vg": "ceph-31d7fbda-737c-5413-835b-7dea8c782162"
2026-04-08 00:40:30.600271 | orchestrator |             },
2026-04-08 00:40:30.600277 | orchestrator |             {
2026-04-08 00:40:30.600283 | orchestrator |                 "data": "osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035",
2026-04-08 00:40:30.600289 | orchestrator |                 "data_vg": "ceph-6d74f3d8-bff6-5917-9df4-f8420d533035"
2026-04-08 00:40:30.600296 | orchestrator |             }
2026-04-08 00:40:30.600303 | orchestrator |         ]
2026-04-08 00:40:30.600310 | orchestrator |     }
2026-04-08 00:40:30.600317 | orchestrator | }
2026-04-08 00:40:30.600324 | orchestrator |
2026-04-08 00:40:30.600330 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-08 00:40:30.600336 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.173) 0:00:23.515 *******
2026-04-08 00:40:30.600343 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-08 00:40:30.600350 | orchestrator |
2026-04-08 00:40:30.600362 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-08 00:40:30.600368 | orchestrator |
2026-04-08 00:40:30.600375 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-08 00:40:30.600381 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.939) 0:00:24.455 ******* 2026-04-08 00:40:30.600388 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-08 00:40:30.600394 | orchestrator | 2026-04-08 00:40:30.600401 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-08 00:40:30.600407 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.354) 0:00:24.809 ******* 2026-04-08 00:40:30.600414 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:40:30.600420 | orchestrator | 2026-04-08 00:40:30.600426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:30.600433 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.409) 0:00:25.218 ******* 2026-04-08 00:40:30.600439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-08 00:40:30.600445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-08 00:40:30.600452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-08 00:40:30.600458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-08 00:40:30.600464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-08 00:40:30.600475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-08 00:40:37.421023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-08 00:40:37.421126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-08 00:40:37.421140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-08 
00:40:37.421152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-08 00:40:37.421163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-08 00:40:37.421173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-08 00:40:37.421184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-08 00:40:37.421195 | orchestrator | 2026-04-08 00:40:37.421207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421219 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.288) 0:00:25.507 ******* 2026-04-08 00:40:37.421230 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421241 | orchestrator | 2026-04-08 00:40:37.421252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421263 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.154) 0:00:25.662 ******* 2026-04-08 00:40:37.421274 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421284 | orchestrator | 2026-04-08 00:40:37.421295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421306 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.137) 0:00:25.799 ******* 2026-04-08 00:40:37.421316 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421327 | orchestrator | 2026-04-08 00:40:37.421338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421348 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.237) 0:00:26.036 ******* 2026-04-08 00:40:37.421359 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421370 | orchestrator | 2026-04-08 00:40:37.421381 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421392 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.193) 0:00:26.230 ******* 2026-04-08 00:40:37.421427 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421438 | orchestrator | 2026-04-08 00:40:37.421449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421460 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.158) 0:00:26.388 ******* 2026-04-08 00:40:37.421470 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421481 | orchestrator | 2026-04-08 00:40:37.421492 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421502 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.184) 0:00:26.573 ******* 2026-04-08 00:40:37.421513 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421524 | orchestrator | 2026-04-08 00:40:37.421536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421549 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.124) 0:00:26.698 ******* 2026-04-08 00:40:37.421562 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.421574 | orchestrator | 2026-04-08 00:40:37.421587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421600 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.133) 0:00:26.831 ******* 2026-04-08 00:40:37.421612 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb) 2026-04-08 00:40:37.421625 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb) 2026-04-08 00:40:37.421637 | orchestrator | 2026-04-08 00:40:37.421650 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421662 | orchestrator | Wednesday 08 April 2026 00:40:32 +0000 (0:00:00.435) 0:00:27.266 ******* 2026-04-08 00:40:37.421692 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f) 2026-04-08 00:40:37.421705 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f) 2026-04-08 00:40:37.421718 | orchestrator | 2026-04-08 00:40:37.421732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421745 | orchestrator | Wednesday 08 April 2026 00:40:33 +0000 (0:00:00.612) 0:00:27.879 ******* 2026-04-08 00:40:37.421755 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567) 2026-04-08 00:40:37.421766 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567) 2026-04-08 00:40:37.421777 | orchestrator | 2026-04-08 00:40:37.421788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421799 | orchestrator | Wednesday 08 April 2026 00:40:33 +0000 (0:00:00.331) 0:00:28.210 ******* 2026-04-08 00:40:37.421809 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5) 2026-04-08 00:40:37.421820 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5) 2026-04-08 00:40:37.421830 | orchestrator | 2026-04-08 00:40:37.421841 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:37.421852 | orchestrator | Wednesday 08 April 2026 00:40:33 +0000 (0:00:00.352) 0:00:28.563 ******* 2026-04-08 00:40:37.421862 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:40:37.421873 | 
orchestrator | 2026-04-08 00:40:37.421884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.421911 | orchestrator | Wednesday 08 April 2026 00:40:34 +0000 (0:00:00.285) 0:00:28.848 ******* 2026-04-08 00:40:37.421922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-08 00:40:37.421933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-08 00:40:37.421944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-08 00:40:37.421977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-08 00:40:37.421995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-08 00:40:37.422005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-08 00:40:37.422064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-08 00:40:37.422076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-08 00:40:37.422087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-08 00:40:37.422098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-08 00:40:37.422109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-08 00:40:37.422119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-08 00:40:37.422130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-08 00:40:37.422141 | orchestrator | 
2026-04-08 00:40:37.422152 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422163 | orchestrator | Wednesday 08 April 2026 00:40:34 +0000 (0:00:00.340) 0:00:29.189 ******* 2026-04-08 00:40:37.422174 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422185 | orchestrator | 2026-04-08 00:40:37.422195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422206 | orchestrator | Wednesday 08 April 2026 00:40:34 +0000 (0:00:00.169) 0:00:29.359 ******* 2026-04-08 00:40:37.422217 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422228 | orchestrator | 2026-04-08 00:40:37.422238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422249 | orchestrator | Wednesday 08 April 2026 00:40:34 +0000 (0:00:00.190) 0:00:29.550 ******* 2026-04-08 00:40:37.422260 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422271 | orchestrator | 2026-04-08 00:40:37.422281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422292 | orchestrator | Wednesday 08 April 2026 00:40:34 +0000 (0:00:00.178) 0:00:29.729 ******* 2026-04-08 00:40:37.422303 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422314 | orchestrator | 2026-04-08 00:40:37.422324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422335 | orchestrator | Wednesday 08 April 2026 00:40:35 +0000 (0:00:00.176) 0:00:29.905 ******* 2026-04-08 00:40:37.422346 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422356 | orchestrator | 2026-04-08 00:40:37.422367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422378 | orchestrator | Wednesday 08 April 2026 00:40:35 +0000 
(0:00:00.198) 0:00:30.104 ******* 2026-04-08 00:40:37.422389 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422400 | orchestrator | 2026-04-08 00:40:37.422410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422421 | orchestrator | Wednesday 08 April 2026 00:40:35 +0000 (0:00:00.443) 0:00:30.547 ******* 2026-04-08 00:40:37.422432 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422442 | orchestrator | 2026-04-08 00:40:37.422453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422464 | orchestrator | Wednesday 08 April 2026 00:40:35 +0000 (0:00:00.206) 0:00:30.753 ******* 2026-04-08 00:40:37.422475 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422485 | orchestrator | 2026-04-08 00:40:37.422496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422507 | orchestrator | Wednesday 08 April 2026 00:40:36 +0000 (0:00:00.197) 0:00:30.951 ******* 2026-04-08 00:40:37.422517 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-08 00:40:37.422535 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-08 00:40:37.422546 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-08 00:40:37.422557 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-08 00:40:37.422568 | orchestrator | 2026-04-08 00:40:37.422578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422589 | orchestrator | Wednesday 08 April 2026 00:40:36 +0000 (0:00:00.562) 0:00:31.514 ******* 2026-04-08 00:40:37.422600 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422611 | orchestrator | 2026-04-08 00:40:37.422622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422632 | orchestrator | 
Wednesday 08 April 2026 00:40:36 +0000 (0:00:00.175) 0:00:31.689 ******* 2026-04-08 00:40:37.422643 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422654 | orchestrator | 2026-04-08 00:40:37.422664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422675 | orchestrator | Wednesday 08 April 2026 00:40:37 +0000 (0:00:00.212) 0:00:31.902 ******* 2026-04-08 00:40:37.422686 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422697 | orchestrator | 2026-04-08 00:40:37.422708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:37.422719 | orchestrator | Wednesday 08 April 2026 00:40:37 +0000 (0:00:00.183) 0:00:32.085 ******* 2026-04-08 00:40:37.422729 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:37.422740 | orchestrator | 2026-04-08 00:40:37.422757 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-08 00:40:40.913825 | orchestrator | Wednesday 08 April 2026 00:40:37 +0000 (0:00:00.171) 0:00:32.257 ******* 2026-04-08 00:40:40.913915 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-08 00:40:40.913924 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-08 00:40:40.913931 | orchestrator | 2026-04-08 00:40:40.913939 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-08 00:40:40.913946 | orchestrator | Wednesday 08 April 2026 00:40:37 +0000 (0:00:00.161) 0:00:32.418 ******* 2026-04-08 00:40:40.913984 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:40.913991 | orchestrator | 2026-04-08 00:40:40.913998 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-08 00:40:40.914005 | orchestrator | Wednesday 08 April 2026 00:40:37 +0000 (0:00:00.120) 0:00:32.539 ******* 
2026-04-08 00:40:40.914064 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:40.914073 | orchestrator | 2026-04-08 00:40:40.914080 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-08 00:40:40.914087 | orchestrator | Wednesday 08 April 2026 00:40:37 +0000 (0:00:00.122) 0:00:32.661 ******* 2026-04-08 00:40:40.914094 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:40.914101 | orchestrator | 2026-04-08 00:40:40.914108 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-08 00:40:40.914115 | orchestrator | Wednesday 08 April 2026 00:40:37 +0000 (0:00:00.129) 0:00:32.791 ******* 2026-04-08 00:40:40.914122 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:40:40.914129 | orchestrator | 2026-04-08 00:40:40.914161 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-08 00:40:40.914169 | orchestrator | Wednesday 08 April 2026 00:40:38 +0000 (0:00:00.234) 0:00:33.026 ******* 2026-04-08 00:40:40.914176 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2a42094-2be0-50d9-ab62-bd2425088ba2'}}) 2026-04-08 00:40:40.914187 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}}) 2026-04-08 00:40:40.914195 | orchestrator | 2026-04-08 00:40:40.914202 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-08 00:40:40.914208 | orchestrator | Wednesday 08 April 2026 00:40:38 +0000 (0:00:00.160) 0:00:33.186 ******* 2026-04-08 00:40:40.914216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2a42094-2be0-50d9-ab62-bd2425088ba2'}})  2026-04-08 00:40:40.914253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}})  
2026-04-08 00:40:40.914260 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914267 | orchestrator |
2026-04-08 00:40:40.914274 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-08 00:40:40.914280 | orchestrator | Wednesday 08 April 2026 00:40:38 +0000 (0:00:00.142) 0:00:33.329 *******
2026-04-08 00:40:40.914287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2a42094-2be0-50d9-ab62-bd2425088ba2'}})
2026-04-08 00:40:40.914294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}})
2026-04-08 00:40:40.914300 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914307 | orchestrator |
2026-04-08 00:40:40.914314 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-08 00:40:40.914320 | orchestrator | Wednesday 08 April 2026 00:40:38 +0000 (0:00:00.134) 0:00:33.464 *******
2026-04-08 00:40:40.914327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2a42094-2be0-50d9-ab62-bd2425088ba2'}})
2026-04-08 00:40:40.914334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}})
2026-04-08 00:40:40.914340 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914347 | orchestrator |
2026-04-08 00:40:40.914354 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-08 00:40:40.914360 | orchestrator | Wednesday 08 April 2026 00:40:38 +0000 (0:00:00.139) 0:00:33.604 *******
2026-04-08 00:40:40.914367 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:40.914374 | orchestrator |
2026-04-08 00:40:40.914382 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-08 00:40:40.914389 | orchestrator | Wednesday 08 April 2026 00:40:38 +0000 (0:00:00.123) 0:00:33.727 *******
2026-04-08 00:40:40.914396 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:40.914404 | orchestrator |
2026-04-08 00:40:40.914412 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-08 00:40:40.914420 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.129) 0:00:33.856 *******
2026-04-08 00:40:40.914427 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914435 | orchestrator |
2026-04-08 00:40:40.914442 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-08 00:40:40.914450 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.123) 0:00:33.980 *******
2026-04-08 00:40:40.914458 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914466 | orchestrator |
2026-04-08 00:40:40.914473 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-08 00:40:40.914481 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.106) 0:00:34.087 *******
2026-04-08 00:40:40.914489 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914497 | orchestrator |
2026-04-08 00:40:40.914505 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-08 00:40:40.914512 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.107) 0:00:34.194 *******
2026-04-08 00:40:40.914521 | orchestrator | ok: [testbed-node-5] => {
2026-04-08 00:40:40.914529 | orchestrator |     "ceph_osd_devices": {
2026-04-08 00:40:40.914537 | orchestrator |         "sdb": {
2026-04-08 00:40:40.914557 | orchestrator |             "osd_lvm_uuid": "d2a42094-2be0-50d9-ab62-bd2425088ba2"
2026-04-08 00:40:40.914565 | orchestrator |         },
2026-04-08 00:40:40.914572 | orchestrator |         "sdc": {
2026-04-08 00:40:40.914579 | orchestrator |             "osd_lvm_uuid": "ed835e4d-3c58-59bb-af9d-6d23bfbc2494"
2026-04-08 00:40:40.914585 | orchestrator |         }
2026-04-08 00:40:40.914592 | orchestrator |     }
2026-04-08 00:40:40.914599 | orchestrator | }
2026-04-08 00:40:40.914606 | orchestrator |
2026-04-08 00:40:40.914618 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-08 00:40:40.914624 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.125) 0:00:34.320 *******
2026-04-08 00:40:40.914631 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914638 | orchestrator |
2026-04-08 00:40:40.914644 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-08 00:40:40.914651 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.102) 0:00:34.422 *******
2026-04-08 00:40:40.914657 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914664 | orchestrator |
2026-04-08 00:40:40.914671 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-08 00:40:40.914677 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.241) 0:00:34.664 *******
2026-04-08 00:40:40.914684 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:40.914690 | orchestrator |
2026-04-08 00:40:40.914697 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-08 00:40:40.914704 | orchestrator | Wednesday 08 April 2026 00:40:39 +0000 (0:00:00.106) 0:00:34.770 *******
2026-04-08 00:40:40.914710 | orchestrator | changed: [testbed-node-5] => {
2026-04-08 00:40:40.914717 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-08 00:40:40.914724 | orchestrator |         "ceph_osd_devices": {
2026-04-08 00:40:40.914731 | orchestrator |             "sdb": {
2026-04-08 00:40:40.914738 | orchestrator |                 "osd_lvm_uuid": "d2a42094-2be0-50d9-ab62-bd2425088ba2"
2026-04-08 00:40:40.914745 | orchestrator |             },
2026-04-08 00:40:40.914751 | orchestrator |             "sdc": {
2026-04-08 00:40:40.914758 | orchestrator |                 "osd_lvm_uuid": "ed835e4d-3c58-59bb-af9d-6d23bfbc2494"
2026-04-08 00:40:40.914765 | orchestrator |             }
2026-04-08 00:40:40.914771 | orchestrator |         },
2026-04-08 00:40:40.914778 | orchestrator |         "lvm_volumes": [
2026-04-08 00:40:40.914785 | orchestrator |             {
2026-04-08 00:40:40.914792 | orchestrator |                 "data": "osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2",
2026-04-08 00:40:40.914798 | orchestrator |                 "data_vg": "ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2"
2026-04-08 00:40:40.914805 | orchestrator |             },
2026-04-08 00:40:40.914815 | orchestrator |             {
2026-04-08 00:40:40.914822 | orchestrator |                 "data": "osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494",
2026-04-08 00:40:40.914828 | orchestrator |                 "data_vg": "ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494"
2026-04-08 00:40:40.914835 | orchestrator |             }
2026-04-08 00:40:40.914842 | orchestrator |         ]
2026-04-08 00:40:40.914848 | orchestrator |     }
2026-04-08 00:40:40.914855 | orchestrator | }
2026-04-08 00:40:40.914862 | orchestrator |
2026-04-08 00:40:40.914869 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-08 00:40:40.914875 | orchestrator | Wednesday 08 April 2026 00:40:40 +0000 (0:00:00.192) 0:00:34.963 *******
2026-04-08 00:40:40.914882 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-08 00:40:40.914888 | orchestrator |
2026-04-08 00:40:40.914895 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:40:40.914902 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 00:40:40.914909 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 00:40:40.914916 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 00:40:40.914923 | orchestrator |
2026-04-08 00:40:40.914930 | orchestrator |
2026-04-08 00:40:40.914936 | orchestrator |
2026-04-08 00:40:40.914943 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:40:40.914992 | orchestrator | Wednesday 08 April 2026 00:40:40 +0000 (0:00:00.779) 0:00:35.743 *******
2026-04-08 00:40:40.915005 | orchestrator | ===============================================================================
2026-04-08 00:40:40.915012 | orchestrator | Write configuration file ------------------------------------------------ 3.57s
2026-04-08 00:40:40.915019 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2026-04-08 00:40:40.915031 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s
2026-04-08 00:40:40.915037 | orchestrator | Get initial list of available block devices ----------------------------- 0.83s
2026-04-08 00:40:40.915044 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2026-04-08 00:40:40.915050 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s
2026-04-08 00:40:40.915057 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.67s
2026-04-08 00:40:40.915063 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-04-08 00:40:40.915070 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-04-08 00:40:40.915076 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2026-04-08 00:40:40.915083 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-04-08 00:40:40.915090 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2026-04-08 00:40:40.915096 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s
2026-04-08 00:40:40.915108 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2026-04-08 00:40:41.133770 | orchestrator | Print configuration data ------------------------------------------------ 0.54s
2026-04-08 00:40:41.133854 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s
2026-04-08 00:40:41.133863 | orchestrator | Set WAL devices config data --------------------------------------------- 0.49s
2026-04-08 00:40:41.133870 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.48s
2026-04-08 00:40:41.133877 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.46s
2026-04-08 00:40:41.133884 | orchestrator | Print DB devices -------------------------------------------------------- 0.46s
2026-04-08 00:41:02.963345 | orchestrator | 2026-04-08 00:41:02 | INFO  | Task 51d7bced-b1ed-40bd-8401-a1374fcd31a2 (sync inventory) is running in background. Output coming soon.
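The `_ceph_configure_lvm_config_data` structures printed for the nodes above follow a simple, visible pattern: each OSD device's `osd_lvm_uuid` is expanded into one `lvm_volumes` entry whose logical volume is named `osd-block-<uuid>` and whose volume group is named `ceph-<uuid>`. A minimal Python sketch of that mapping, reconstructed purely from the log output (the actual derivation lives in the OSISM Ansible tasks, not in this snippet):

```python
# Illustrative reconstruction of the block-only lvm_volumes derivation
# seen in the "Print configuration data" task output above. Device names
# and UUIDs are taken verbatim from the testbed-node-5 log.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "d2a42094-2be0-50d9-ab62-bd2425088ba2"},
    "sdc": {"osd_lvm_uuid": "ed835e4d-3c58-59bb-af9d-6d23bfbc2494"},
}

# One entry per OSD: LV "osd-block-<uuid>" inside VG "ceph-<uuid>".
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]
```

With WAL or DB devices configured, the skipped "block + db" / "block + wal" tasks above would add further keys per entry; here only the block-only variant runs.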
2026-04-08 00:41:31.230846 | orchestrator | 2026-04-08 00:41:04 | INFO  | Starting group_vars file reorganization
2026-04-08 00:41:31.231058 | orchestrator | 2026-04-08 00:41:04 | INFO  | Moved 0 file(s) to their respective directories
2026-04-08 00:41:31.231088 | orchestrator | 2026-04-08 00:41:04 | INFO  | Group_vars file reorganization completed
2026-04-08 00:41:31.231107 | orchestrator | 2026-04-08 00:41:07 | INFO  | Starting variable preparation from inventory
2026-04-08 00:41:31.231125 | orchestrator | 2026-04-08 00:41:10 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-08 00:41:31.231143 | orchestrator | 2026-04-08 00:41:10 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-08 00:41:31.231185 | orchestrator | 2026-04-08 00:41:10 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-08 00:41:31.231204 | orchestrator | 2026-04-08 00:41:10 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-08 00:41:31.231221 | orchestrator | 2026-04-08 00:41:10 | INFO  | Variable preparation completed
2026-04-08 00:41:31.231238 | orchestrator | 2026-04-08 00:41:11 | INFO  | Starting inventory overwrite handling
2026-04-08 00:41:31.231256 | orchestrator | 2026-04-08 00:41:11 | INFO  | Handling group overwrites in 99-overwrite
2026-04-08 00:41:31.231275 | orchestrator | 2026-04-08 00:41:11 | INFO  | Removing group frr:children from 60-generic
2026-04-08 00:41:31.231327 | orchestrator | 2026-04-08 00:41:11 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-08 00:41:31.231348 | orchestrator | 2026-04-08 00:41:11 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-08 00:41:31.231366 | orchestrator | 2026-04-08 00:41:11 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-08 00:41:31.231384 | orchestrator | 2026-04-08 00:41:11 | INFO  | Handling group overwrites in 20-roles
2026-04-08 00:41:31.231402 | orchestrator | 2026-04-08 00:41:11 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-08 00:41:31.231418 | orchestrator | 2026-04-08 00:41:11 | INFO  | Removed 5 group(s) in total
2026-04-08 00:41:31.231436 | orchestrator | 2026-04-08 00:41:11 | INFO  | Inventory overwrite handling completed
2026-04-08 00:41:31.231454 | orchestrator | 2026-04-08 00:41:12 | INFO  | Starting merge of inventory files
2026-04-08 00:41:31.231470 | orchestrator | 2026-04-08 00:41:12 | INFO  | Inventory files merged successfully
2026-04-08 00:41:31.231488 | orchestrator | 2026-04-08 00:41:16 | INFO  | Generating minified hosts file
2026-04-08 00:41:31.231506 | orchestrator | 2026-04-08 00:41:18 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-08 00:41:31.231525 | orchestrator | 2026-04-08 00:41:18 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-08 00:41:31.231544 | orchestrator | 2026-04-08 00:41:19 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-08 00:41:31.231562 | orchestrator | 2026-04-08 00:41:29 | INFO  | Successfully wrote ClusterShell configuration
2026-04-08 00:41:31.231579 | orchestrator | [master 33d3aa0] 2026-04-08-00-41
2026-04-08 00:41:31.231598 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-08 00:41:31.231616 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-08 00:41:31.231633 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-08 00:41:31.231650 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-08 00:41:32.443601 | orchestrator | 2026-04-08 00:41:32 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-08 00:41:32.499193 | orchestrator | 2026-04-08 00:41:32 | INFO  | Task 42e7078e-214e-4a94-9c57-900e423a4906 (ceph-create-lvm-devices) was prepared for execution.
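For reference, the ceph-lvm-configuration.yml host_vars files committed above plausibly carry the per-node ceph_osd_devices mapping that the ceph-create-lvm-devices play consumes. The sketch below is inferred from the (item={'key': ..., 'value': ...}) output printed later in this log; treat the exact layout as an assumption, not the canonical file format.

```yaml
# Hypothetical sketch of fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml,
# inferred from the ceph_osd_devices items shown by the play below.
# The real file generated by the testbed may contain additional keys.
---
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: bf49c8a6-5f7f-52ec-8321-922f51127285
  sdc:
    osd_lvm_uuid: 42db71c5-e51d-540c-8fbe-0cd4e432c3d3
```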
2026-04-08 00:41:32.499270 | orchestrator | 2026-04-08 00:41:32 | INFO  | It takes a moment until task 42e7078e-214e-4a94-9c57-900e423a4906 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-08 00:41:42.905818 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-08 00:41:42.905974 | orchestrator | 2.16.14
2026-04-08 00:41:42.905988 | orchestrator |
2026-04-08 00:41:42.905996 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-08 00:41:42.906004 | orchestrator |
2026-04-08 00:41:42.906010 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-08 00:41:42.906048 | orchestrator | Wednesday 08 April 2026 00:41:36 +0000 (0:00:00.245) 0:00:00.245 *******
2026-04-08 00:41:42.906056 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 00:41:42.906063 | orchestrator |
2026-04-08 00:41:42.906070 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-08 00:41:42.906077 | orchestrator | Wednesday 08 April 2026 00:41:36 +0000 (0:00:00.208) 0:00:00.454 *******
2026-04-08 00:41:42.906084 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:41:42.906091 | orchestrator |
2026-04-08 00:41:42.906098 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906105 | orchestrator | Wednesday 08 April 2026 00:41:37 +0000 (0:00:00.190) 0:00:00.645 *******
2026-04-08 00:41:42.906133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:41:42.906141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:41:42.906147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:41:42.906154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:41:42.906161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:41:42.906168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:41:42.906175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:41:42.906182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:41:42.906189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-08 00:41:42.906195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:41:42.906201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:41:42.906208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:41:42.906214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:41:42.906222 | orchestrator |
2026-04-08 00:41:42.906231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906241 | orchestrator | Wednesday 08 April 2026 00:41:37 +0000 (0:00:00.366) 0:00:01.011 *******
2026-04-08 00:41:42.906247 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906254 | orchestrator |
2026-04-08 00:41:42.906261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906268 | orchestrator | Wednesday 08 April 2026 00:41:37 +0000 (0:00:00.415) 0:00:01.427 *******
2026-04-08 00:41:42.906275 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906281 | orchestrator |
2026-04-08 00:41:42.906287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906293 | orchestrator | Wednesday 08 April 2026 00:41:37 +0000 (0:00:00.161) 0:00:01.589 *******
2026-04-08 00:41:42.906314 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906322 | orchestrator |
2026-04-08 00:41:42.906328 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906335 | orchestrator | Wednesday 08 April 2026 00:41:38 +0000 (0:00:00.155) 0:00:01.744 *******
2026-04-08 00:41:42.906341 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906348 | orchestrator |
2026-04-08 00:41:42.906354 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906360 | orchestrator | Wednesday 08 April 2026 00:41:38 +0000 (0:00:00.160) 0:00:01.905 *******
2026-04-08 00:41:42.906370 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906379 | orchestrator |
2026-04-08 00:41:42.906386 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906392 | orchestrator | Wednesday 08 April 2026 00:41:38 +0000 (0:00:00.158) 0:00:02.064 *******
2026-04-08 00:41:42.906398 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906404 | orchestrator |
2026-04-08 00:41:42.906410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906417 | orchestrator | Wednesday 08 April 2026 00:41:38 +0000 (0:00:00.161) 0:00:02.225 *******
2026-04-08 00:41:42.906424 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906431 | orchestrator |
2026-04-08 00:41:42.906437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906444 | orchestrator | Wednesday 08 April 2026 00:41:38 +0000 (0:00:00.157) 0:00:02.382 *******
2026-04-08 00:41:42.906450 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906465 | orchestrator |
2026-04-08 00:41:42.906472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906478 | orchestrator | Wednesday 08 April 2026 00:41:38 +0000 (0:00:00.157) 0:00:02.540 *******
2026-04-08 00:41:42.906485 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d)
2026-04-08 00:41:42.906492 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d)
2026-04-08 00:41:42.906498 | orchestrator |
2026-04-08 00:41:42.906505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906526 | orchestrator | Wednesday 08 April 2026 00:41:39 +0000 (0:00:00.366) 0:00:02.906 *******
2026-04-08 00:41:42.906533 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232)
2026-04-08 00:41:42.906539 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232)
2026-04-08 00:41:42.906545 | orchestrator |
2026-04-08 00:41:42.906552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906558 | orchestrator | Wednesday 08 April 2026 00:41:39 +0000 (0:00:00.352) 0:00:03.259 *******
2026-04-08 00:41:42.906564 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76)
2026-04-08 00:41:42.906571 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76)
2026-04-08 00:41:42.906577 | orchestrator |
2026-04-08 00:41:42.906584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906589 | orchestrator | Wednesday 08 April 2026 00:41:40 +0000 (0:00:00.478) 0:00:03.737 *******
2026-04-08 00:41:42.906595 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117)
2026-04-08 00:41:42.906602 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117)
2026-04-08 00:41:42.906608 | orchestrator |
2026-04-08 00:41:42.906615 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:41:42.906621 | orchestrator | Wednesday 08 April 2026 00:41:40 +0000 (0:00:00.529) 0:00:04.267 *******
2026-04-08 00:41:42.906626 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-08 00:41:42.906633 | orchestrator |
2026-04-08 00:41:42.906640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906652 | orchestrator | Wednesday 08 April 2026 00:41:41 +0000 (0:00:00.572) 0:00:04.839 *******
2026-04-08 00:41:42.906658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:41:42.906665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:41:42.906671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:41:42.906678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:41:42.906684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:41:42.906691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:41:42.906697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:41:42.906703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:41:42.906709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-08 00:41:42.906715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:41:42.906722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:41:42.906727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:41:42.906741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:41:42.906748 | orchestrator |
2026-04-08 00:41:42.906754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906760 | orchestrator | Wednesday 08 April 2026 00:41:41 +0000 (0:00:00.405) 0:00:05.245 *******
2026-04-08 00:41:42.906766 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906772 | orchestrator |
2026-04-08 00:41:42.906779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906785 | orchestrator | Wednesday 08 April 2026 00:41:41 +0000 (0:00:00.178) 0:00:05.423 *******
2026-04-08 00:41:42.906791 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906798 | orchestrator |
2026-04-08 00:41:42.906805 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906811 | orchestrator | Wednesday 08 April 2026 00:41:41 +0000 (0:00:00.178) 0:00:05.601 *******
2026-04-08 00:41:42.906817 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906824 | orchestrator |
2026-04-08 00:41:42.906830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906836 | orchestrator | Wednesday 08 April 2026 00:41:42 +0000 (0:00:00.189) 0:00:05.791 *******
2026-04-08 00:41:42.906843 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906849 | orchestrator |
2026-04-08 00:41:42.906855 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906862 | orchestrator | Wednesday 08 April 2026 00:41:42 +0000 (0:00:00.195) 0:00:05.986 *******
2026-04-08 00:41:42.906868 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906875 | orchestrator |
2026-04-08 00:41:42.906881 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906907 | orchestrator | Wednesday 08 April 2026 00:41:42 +0000 (0:00:00.190) 0:00:06.177 *******
2026-04-08 00:41:42.906913 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906919 | orchestrator |
2026-04-08 00:41:42.906924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:42.906930 | orchestrator | Wednesday 08 April 2026 00:41:42 +0000 (0:00:00.181) 0:00:06.358 *******
2026-04-08 00:41:42.906936 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:42.906941 | orchestrator |
2026-04-08 00:41:42.906956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:50.331467 | orchestrator | Wednesday 08 April 2026 00:41:42 +0000 (0:00:00.167) 0:00:06.525 *******
2026-04-08 00:41:50.331541 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331548 | orchestrator |
2026-04-08 00:41:50.331554 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:50.331559 | orchestrator | Wednesday 08 April 2026 00:41:43 +0000 (0:00:00.164) 0:00:06.690 *******
2026-04-08 00:41:50.331564 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-08 00:41:50.331569 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-08 00:41:50.331574 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-08 00:41:50.331578 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-08 00:41:50.331583 | orchestrator |
2026-04-08 00:41:50.331587 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:50.331592 | orchestrator | Wednesday 08 April 2026 00:41:43 +0000 (0:00:00.902) 0:00:07.592 *******
2026-04-08 00:41:50.331596 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331600 | orchestrator |
2026-04-08 00:41:50.331604 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:50.331609 | orchestrator | Wednesday 08 April 2026 00:41:44 +0000 (0:00:00.179) 0:00:07.771 *******
2026-04-08 00:41:50.331613 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331617 | orchestrator |
2026-04-08 00:41:50.331622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:50.331643 | orchestrator | Wednesday 08 April 2026 00:41:44 +0000 (0:00:00.188) 0:00:07.960 *******
2026-04-08 00:41:50.331647 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331652 | orchestrator |
2026-04-08 00:41:50.331656 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:41:50.331661 | orchestrator | Wednesday 08 April 2026 00:41:44 +0000 (0:00:00.186) 0:00:08.147 *******
2026-04-08 00:41:50.331665 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331670 | orchestrator |
2026-04-08 00:41:50.331674 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-08 00:41:50.331679 | orchestrator | Wednesday 08 April 2026 00:41:44 +0000 (0:00:00.190) 0:00:08.337 *******
2026-04-08 00:41:50.331683 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331687 | orchestrator |
2026-04-08 00:41:50.331692 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-08 00:41:50.331698 | orchestrator | Wednesday 08 April 2026 00:41:44 +0000 (0:00:00.122) 0:00:08.460 *******
2026-04-08 00:41:50.331705 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf49c8a6-5f7f-52ec-8321-922f51127285'}})
2026-04-08 00:41:50.331712 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42db71c5-e51d-540c-8fbe-0cd4e432c3d3'}})
2026-04-08 00:41:50.331718 | orchestrator |
2026-04-08 00:41:50.331725 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-08 00:41:50.331731 | orchestrator | Wednesday 08 April 2026 00:41:45 +0000 (0:00:00.170) 0:00:08.630 *******
2026-04-08 00:41:50.331739 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.331747 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.331753 | orchestrator |
2026-04-08 00:41:50.331760 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-08 00:41:50.331766 | orchestrator | Wednesday 08 April 2026 00:41:46 +0000 (0:00:01.907) 0:00:10.538 *******
2026-04-08 00:41:50.331773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.331794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.331801 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331808 | orchestrator |
2026-04-08 00:41:50.331814 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-08 00:41:50.331821 | orchestrator | Wednesday 08 April 2026 00:41:47 +0000 (0:00:00.130) 0:00:10.669 *******
2026-04-08 00:41:50.331827 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.331833 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.331840 | orchestrator |
2026-04-08 00:41:50.331846 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-08 00:41:50.331853 | orchestrator | Wednesday 08 April 2026 00:41:48 +0000 (0:00:01.380) 0:00:12.049 *******
2026-04-08 00:41:50.331859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.331866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.331872 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331925 | orchestrator |
2026-04-08 00:41:50.331931 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-08 00:41:50.331943 | orchestrator | Wednesday 08 April 2026 00:41:48 +0000 (0:00:00.139) 0:00:12.188 *******
2026-04-08 00:41:50.331962 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.331969 | orchestrator |
2026-04-08 00:41:50.331975 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-08 00:41:50.331982 | orchestrator | Wednesday 08 April 2026 00:41:48 +0000 (0:00:00.134) 0:00:12.323 *******
2026-04-08 00:41:50.331988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.331994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.332000 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332007 | orchestrator |
2026-04-08 00:41:50.332013 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-08 00:41:50.332019 | orchestrator | Wednesday 08 April 2026 00:41:48 +0000 (0:00:00.281) 0:00:12.605 *******
2026-04-08 00:41:50.332025 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332031 | orchestrator |
2026-04-08 00:41:50.332037 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-08 00:41:50.332043 | orchestrator | Wednesday 08 April 2026 00:41:49 +0000 (0:00:00.121) 0:00:12.726 *******
2026-04-08 00:41:50.332050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.332056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.332062 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332068 | orchestrator |
2026-04-08 00:41:50.332079 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-08 00:41:50.332086 | orchestrator | Wednesday 08 April 2026 00:41:49 +0000 (0:00:00.140) 0:00:12.866 *******
2026-04-08 00:41:50.332092 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332098 | orchestrator |
2026-04-08 00:41:50.332104 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-08 00:41:50.332110 | orchestrator | Wednesday 08 April 2026 00:41:49 +0000 (0:00:00.152) 0:00:13.019 *******
2026-04-08 00:41:50.332116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.332122 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.332128 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332134 | orchestrator |
2026-04-08 00:41:50.332141 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-08 00:41:50.332147 | orchestrator | Wednesday 08 April 2026 00:41:49 +0000 (0:00:00.140) 0:00:13.208 *******
2026-04-08 00:41:50.332153 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:41:50.332159 | orchestrator |
2026-04-08 00:41:50.332165 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-08 00:41:50.332171 | orchestrator | Wednesday 08 April 2026 00:41:49 +0000 (0:00:00.140) 0:00:13.349 *******
2026-04-08 00:41:50.332178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.332184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.332190 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332196 | orchestrator |
2026-04-08 00:41:50.332202 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-08 00:41:50.332213 | orchestrator | Wednesday 08 April 2026 00:41:49 +0000 (0:00:00.160) 0:00:13.510 *******
2026-04-08 00:41:50.332219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.332225 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.332231 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332237 | orchestrator |
2026-04-08 00:41:50.332243 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-08 00:41:50.332250 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:00.145) 0:00:13.655 *******
2026-04-08 00:41:50.332256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})
2026-04-08 00:41:50.332262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})
2026-04-08 00:41:50.332268 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332274 | orchestrator |
2026-04-08 00:41:50.332280 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-08 00:41:50.332286 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:00.132) 0:00:13.808 *******
2026-04-08 00:41:50.332292 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:50.332298 | orchestrator |
2026-04-08 00:41:50.332304 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-08 00:41:50.332315 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:00.143) 0:00:13.952 *******
2026-04-08 00:41:56.405777 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.405927 | orchestrator |
2026-04-08 00:41:56.405938 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-08 00:41:56.405950 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:00.167) 0:00:14.120 *******
2026-04-08 00:41:56.405959 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.405969 | orchestrator |
2026-04-08 00:41:56.405979 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-08 00:41:56.405990 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:00.132) 0:00:14.252 *******
2026-04-08 00:41:56.405999 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:41:56.406012 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-08 00:41:56.406075 | orchestrator | }
2026-04-08 00:41:56.406082 | orchestrator |
2026-04-08 00:41:56.406087 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-08 00:41:56.406093 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:00.265) 0:00:14.518 *******
2026-04-08 00:41:56.406099 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:41:56.406105 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-08 00:41:56.406111 | orchestrator | }
2026-04-08 00:41:56.406117 | orchestrator |
2026-04-08 00:41:56.406122 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-08 00:41:56.406128 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.144) 0:00:14.663 *******
2026-04-08 00:41:56.406134 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:41:56.406140 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-08 00:41:56.406146 | orchestrator | }
2026-04-08 00:41:56.406151 | orchestrator |
2026-04-08 00:41:56.406157 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-08 00:41:56.406163 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.145) 0:00:14.809 *******
2026-04-08 00:41:56.406168 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:41:56.406174 | orchestrator |
2026-04-08 00:41:56.406180 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-08 00:41:56.406186 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.686) 0:00:15.496 *******
2026-04-08 00:41:56.406215 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:41:56.406221 | orchestrator |
2026-04-08 00:41:56.406227 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-08 00:41:56.406232 | orchestrator | Wednesday 08 April 2026 00:41:52 +0000 (0:00:00.505) 0:00:16.001 *******
2026-04-08 00:41:56.406238 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:41:56.406244 | orchestrator |
2026-04-08 00:41:56.406250 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-08 00:41:56.406255 | orchestrator | Wednesday 08 April 2026 00:41:52 +0000 (0:00:00.518) 0:00:16.519 *******
2026-04-08 00:41:56.406261 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:41:56.406266 | orchestrator |
2026-04-08 00:41:56.406272 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-08 00:41:56.406278 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.126) 0:00:16.645 *******
2026-04-08 00:41:56.406283 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406289 | orchestrator |
2026-04-08 00:41:56.406294 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-08 00:41:56.406300 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.088) 0:00:16.733 *******
2026-04-08 00:41:56.406307 | orchestrator | skipping: [testbed-node-3]
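The "Create block VGs" and "Create block LVs" tasks above use a simple naming pattern derived from each device's osd_lvm_uuid: a VG "ceph-&lt;uuid&gt;" holding one data LV "osd-block-&lt;uuid&gt;". A minimal sketch of that mapping (the helper function is illustrative, not part of the OSISM playbook):

```python
# Illustrative sketch of the VG/LV naming scheme visible in the log above.
# block_lvm_names is a hypothetical helper, not an OSISM function.
def block_lvm_names(osd_lvm_uuid: str) -> dict:
    """Return the data LV and VG names for one ceph_osd_devices entry."""
    return {
        "data": f"osd-block-{osd_lvm_uuid}",
        "data_vg": f"ceph-{osd_lvm_uuid}",
    }

# The ceph_osd_devices mapping as printed by the
# "Create dict of block VGs -> PVs" task for testbed-node-3.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "bf49c8a6-5f7f-52ec-8321-922f51127285"},
    "sdc": {"osd_lvm_uuid": "42db71c5-e51d-540c-8fbe-0cd4e432c3d3"},
}

for device, config in ceph_osd_devices.items():
    print(device, block_lvm_names(config["osd_lvm_uuid"]))
```

Deriving both names from one stable UUID keeps the OSD's LVM layout reproducible across reruns, which is why the play can be idempotent.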
2026-04-08 00:41:56.406313 | orchestrator |
2026-04-08 00:41:56.406320 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-08 00:41:56.406326 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.109) 0:00:16.842 *******
2026-04-08 00:41:56.406332 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:41:56.406338 | orchestrator |     "vgs_report": {
2026-04-08 00:41:56.406345 | orchestrator |         "vg": []
2026-04-08 00:41:56.406352 | orchestrator |     }
2026-04-08 00:41:56.406359 | orchestrator | }
2026-04-08 00:41:56.406365 | orchestrator |
2026-04-08 00:41:56.406371 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-08 00:41:56.406378 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.152) 0:00:16.994 *******
2026-04-08 00:41:56.406384 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406390 | orchestrator |
2026-04-08 00:41:56.406397 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-08 00:41:56.406403 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.148) 0:00:17.146 *******
2026-04-08 00:41:56.406410 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406416 | orchestrator |
2026-04-08 00:41:56.406422 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-08 00:41:56.406428 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.148) 0:00:17.295 *******
2026-04-08 00:41:56.406433 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406441 | orchestrator |
2026-04-08 00:41:56.406450 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-08 00:41:56.406457 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.326) 0:00:17.621 *******
2026-04-08 00:41:56.406471 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406481 | orchestrator |
2026-04-08 00:41:56.406489 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-08 00:41:56.406498 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.126) 0:00:17.747 *******
2026-04-08 00:41:56.406506 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406514 | orchestrator |
2026-04-08 00:41:56.406522 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-08 00:41:56.406531 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.115) 0:00:17.863 *******
2026-04-08 00:41:56.406539 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406548 | orchestrator |
2026-04-08 00:41:56.406556 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-08 00:41:56.406565 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.155) 0:00:18.018 *******
2026-04-08 00:41:56.406574 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406592 | orchestrator |
2026-04-08 00:41:56.406601 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-08 00:41:56.406610 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.132) 0:00:18.151 *******
2026-04-08 00:41:56.406633 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406639 | orchestrator |
2026-04-08 00:41:56.406662 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-08 00:41:56.406668 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.104) 0:00:18.255 *******
2026-04-08 00:41:56.406673 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406679 | orchestrator |
2026-04-08 00:41:56.406684 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-08 00:41:56.406690 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.101) 0:00:18.357 *******
2026-04-08 00:41:56.406695 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406701 | orchestrator |
2026-04-08 00:41:56.406706 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-08 00:41:56.406712 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.129) 0:00:18.486 *******
2026-04-08 00:41:56.406717 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406723 | orchestrator |
2026-04-08 00:41:56.406728 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-08 00:41:56.406734 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.118) 0:00:18.604 *******
2026-04-08 00:41:56.406739 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406744 | orchestrator |
2026-04-08 00:41:56.406750 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-08 00:41:56.406755 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:00.107) 0:00:18.712 *******
2026-04-08 00:41:56.406761 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406766 | orchestrator |
2026-04-08 00:41:56.406772 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-08 00:41:56.406777 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:00.139) 0:00:18.851 *******
2026-04-08 00:41:56.406782 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:41:56.406788 | orchestrator |
2026-04-08 00:41:56.406797 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-08 00:41:56.406803 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:00.107) 0:00:18.959 *******
2026-04-08 00:41:56.406810 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285',
'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:41:56.406818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:41:56.406823 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:41:56.406829 | orchestrator | 2026-04-08 00:41:56.406834 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-08 00:41:56.406840 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:00.207) 0:00:19.166 ******* 2026-04-08 00:41:56.406845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:41:56.406851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:41:56.406856 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:41:56.406862 | orchestrator | 2026-04-08 00:41:56.406867 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-08 00:41:56.406935 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:00.356) 0:00:19.523 ******* 2026-04-08 00:41:56.406941 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:41:56.406947 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:41:56.406958 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:41:56.406963 | orchestrator | 2026-04-08 00:41:56.406969 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-08 00:41:56.406974 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:00.160) 0:00:19.684 ******* 2026-04-08 00:41:56.406979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:41:56.406985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:41:56.406990 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:41:56.406996 | orchestrator | 2026-04-08 00:41:56.407001 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-08 00:41:56.407007 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:00.135) 0:00:19.820 ******* 2026-04-08 00:41:56.407014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:41:56.407020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:41:56.407026 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:41:56.407033 | orchestrator | 2026-04-08 00:41:56.407039 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-08 00:41:56.407045 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:00.144) 0:00:19.964 ******* 2026-04-08 00:41:56.407057 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:42:01.623827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 
'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:42:01.624046 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:01.624076 | orchestrator | 2026-04-08 00:42:01.624097 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-08 00:42:01.624114 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:00.150) 0:00:20.114 ******* 2026-04-08 00:42:01.624126 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:42:01.624137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:42:01.624149 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:01.624159 | orchestrator | 2026-04-08 00:42:01.624170 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-08 00:42:01.624181 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:00.148) 0:00:20.264 ******* 2026-04-08 00:42:01.624192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:42:01.624227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:42:01.624239 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:01.624249 | orchestrator | 2026-04-08 00:42:01.624260 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-08 00:42:01.624271 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:00.131) 0:00:20.395 ******* 2026-04-08 00:42:01.624282 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:42:01.624296 | 
orchestrator | 2026-04-08 00:42:01.624341 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-08 00:42:01.624355 | orchestrator | Wednesday 08 April 2026 00:41:57 +0000 (0:00:00.491) 0:00:20.886 ******* 2026-04-08 00:42:01.624367 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:42:01.624380 | orchestrator | 2026-04-08 00:42:01.624393 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-08 00:42:01.624412 | orchestrator | Wednesday 08 April 2026 00:41:57 +0000 (0:00:00.529) 0:00:21.416 ******* 2026-04-08 00:42:01.624441 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:42:01.624461 | orchestrator | 2026-04-08 00:42:01.624480 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-08 00:42:01.624498 | orchestrator | Wednesday 08 April 2026 00:41:57 +0000 (0:00:00.164) 0:00:21.581 ******* 2026-04-08 00:42:01.624516 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'vg_name': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'}) 2026-04-08 00:42:01.624537 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'vg_name': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'}) 2026-04-08 00:42:01.624554 | orchestrator | 2026-04-08 00:42:01.624574 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-08 00:42:01.624592 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:00.163) 0:00:21.744 ******* 2026-04-08 00:42:01.624609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:42:01.624627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 
'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:42:01.624645 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:01.624664 | orchestrator | 2026-04-08 00:42:01.624683 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-08 00:42:01.624702 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:00.145) 0:00:21.890 ******* 2026-04-08 00:42:01.624719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:42:01.624736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:42:01.624752 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:01.624770 | orchestrator | 2026-04-08 00:42:01.624789 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-08 00:42:01.624808 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:00.327) 0:00:22.218 ******* 2026-04-08 00:42:01.624827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'})  2026-04-08 00:42:01.624845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'})  2026-04-08 00:42:01.624893 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:01.624916 | orchestrator | 2026-04-08 00:42:01.624935 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-08 00:42:01.624955 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:00.163) 0:00:22.381 ******* 2026-04-08 00:42:01.625005 | orchestrator | ok: [testbed-node-3] => { 2026-04-08 
00:42:01.625027 | orchestrator |  "lvm_report": { 2026-04-08 00:42:01.625048 | orchestrator |  "lv": [ 2026-04-08 00:42:01.625069 | orchestrator |  { 2026-04-08 00:42:01.625090 | orchestrator |  "lv_name": "osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3", 2026-04-08 00:42:01.625110 | orchestrator |  "vg_name": "ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3" 2026-04-08 00:42:01.625129 | orchestrator |  }, 2026-04-08 00:42:01.625166 | orchestrator |  { 2026-04-08 00:42:01.625186 | orchestrator |  "lv_name": "osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285", 2026-04-08 00:42:01.625206 | orchestrator |  "vg_name": "ceph-bf49c8a6-5f7f-52ec-8321-922f51127285" 2026-04-08 00:42:01.625226 | orchestrator |  } 2026-04-08 00:42:01.625247 | orchestrator |  ], 2026-04-08 00:42:01.625266 | orchestrator |  "pv": [ 2026-04-08 00:42:01.625286 | orchestrator |  { 2026-04-08 00:42:01.625306 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-08 00:42:01.625326 | orchestrator |  "vg_name": "ceph-bf49c8a6-5f7f-52ec-8321-922f51127285" 2026-04-08 00:42:01.625347 | orchestrator |  }, 2026-04-08 00:42:01.625367 | orchestrator |  { 2026-04-08 00:42:01.625386 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-08 00:42:01.625406 | orchestrator |  "vg_name": "ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3" 2026-04-08 00:42:01.625426 | orchestrator |  } 2026-04-08 00:42:01.625445 | orchestrator |  ] 2026-04-08 00:42:01.625466 | orchestrator |  } 2026-04-08 00:42:01.625486 | orchestrator | } 2026-04-08 00:42:01.625507 | orchestrator | 2026-04-08 00:42:01.625528 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-08 00:42:01.625547 | orchestrator | 2026-04-08 00:42:01.625566 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-08 00:42:01.625587 | orchestrator | Wednesday 08 April 2026 00:41:59 +0000 (0:00:00.248) 0:00:22.629 ******* 2026-04-08 00:42:01.625607 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-08 00:42:01.625626 | orchestrator | 2026-04-08 00:42:01.625645 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-08 00:42:01.625663 | orchestrator | Wednesday 08 April 2026 00:41:59 +0000 (0:00:00.253) 0:00:22.883 ******* 2026-04-08 00:42:01.625684 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:42:01.625703 | orchestrator | 2026-04-08 00:42:01.625722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:01.625741 | orchestrator | Wednesday 08 April 2026 00:41:59 +0000 (0:00:00.246) 0:00:23.130 ******* 2026-04-08 00:42:01.625760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:42:01.625778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:42:01.625797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:42:01.625817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:42:01.625834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:42:01.625851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:42:01.625908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:42:01.625927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:42:01.625946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-08 00:42:01.625982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:42:01.626002 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-08 00:42:01.626116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:42:01.626143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:42:01.626162 | orchestrator | 2026-04-08 00:42:01.626182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:01.626201 | orchestrator | Wednesday 08 April 2026 00:41:59 +0000 (0:00:00.428) 0:00:23.558 ******* 2026-04-08 00:42:01.626220 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:01.626257 | orchestrator | 2026-04-08 00:42:01.626277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:01.626295 | orchestrator | Wednesday 08 April 2026 00:42:00 +0000 (0:00:00.188) 0:00:23.747 ******* 2026-04-08 00:42:01.626312 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:01.626330 | orchestrator | 2026-04-08 00:42:01.626348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:01.626367 | orchestrator | Wednesday 08 April 2026 00:42:00 +0000 (0:00:00.247) 0:00:23.994 ******* 2026-04-08 00:42:01.626386 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:01.626405 | orchestrator | 2026-04-08 00:42:01.626424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:01.626444 | orchestrator | Wednesday 08 April 2026 00:42:00 +0000 (0:00:00.186) 0:00:24.180 ******* 2026-04-08 00:42:01.626463 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:01.626481 | orchestrator | 2026-04-08 00:42:01.626532 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:01.626550 | orchestrator | Wednesday 08 April 2026 00:42:01 +0000 
(0:00:00.701) 0:00:24.882 ******* 2026-04-08 00:42:01.626568 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:01.626586 | orchestrator | 2026-04-08 00:42:01.626604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:01.626621 | orchestrator | Wednesday 08 April 2026 00:42:01 +0000 (0:00:00.181) 0:00:25.063 ******* 2026-04-08 00:42:01.626638 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:01.626654 | orchestrator | 2026-04-08 00:42:01.626694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:11.487844 | orchestrator | Wednesday 08 April 2026 00:42:01 +0000 (0:00:00.180) 0:00:25.244 ******* 2026-04-08 00:42:11.488018 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488034 | orchestrator | 2026-04-08 00:42:11.488045 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:11.488056 | orchestrator | Wednesday 08 April 2026 00:42:01 +0000 (0:00:00.170) 0:00:25.414 ******* 2026-04-08 00:42:11.488066 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488076 | orchestrator | 2026-04-08 00:42:11.488086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:11.488096 | orchestrator | Wednesday 08 April 2026 00:42:01 +0000 (0:00:00.155) 0:00:25.570 ******* 2026-04-08 00:42:11.488107 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda) 2026-04-08 00:42:11.488120 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda) 2026-04-08 00:42:11.488130 | orchestrator | 2026-04-08 00:42:11.488140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:11.488154 | orchestrator | Wednesday 08 April 2026 00:42:02 +0000 
(0:00:00.339) 0:00:25.909 ******* 2026-04-08 00:42:11.488165 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707) 2026-04-08 00:42:11.488175 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707) 2026-04-08 00:42:11.488185 | orchestrator | 2026-04-08 00:42:11.488213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:11.488223 | orchestrator | Wednesday 08 April 2026 00:42:02 +0000 (0:00:00.399) 0:00:26.309 ******* 2026-04-08 00:42:11.488234 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466) 2026-04-08 00:42:11.488244 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466) 2026-04-08 00:42:11.488254 | orchestrator | 2026-04-08 00:42:11.488263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:11.488273 | orchestrator | Wednesday 08 April 2026 00:42:03 +0000 (0:00:00.429) 0:00:26.738 ******* 2026-04-08 00:42:11.488283 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047) 2026-04-08 00:42:11.488319 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047) 2026-04-08 00:42:11.488330 | orchestrator | 2026-04-08 00:42:11.488341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:11.488351 | orchestrator | Wednesday 08 April 2026 00:42:03 +0000 (0:00:00.417) 0:00:27.156 ******* 2026-04-08 00:42:11.488361 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:42:11.488371 | orchestrator | 2026-04-08 00:42:11.488380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 
00:42:11.488390 | orchestrator | Wednesday 08 April 2026 00:42:03 +0000 (0:00:00.338) 0:00:27.495 ******* 2026-04-08 00:42:11.488400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:42:11.488412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:42:11.488421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:42:11.488430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:42:11.488441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:42:11.488451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:42:11.488459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:42:11.488469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:42:11.488478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-08 00:42:11.488486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:42:11.488496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-08 00:42:11.488505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:42:11.488513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:42:11.488522 | orchestrator | 2026-04-08 00:42:11.488531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488540 | 
orchestrator | Wednesday 08 April 2026 00:42:04 +0000 (0:00:00.546) 0:00:28.041 ******* 2026-04-08 00:42:11.488549 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488558 | orchestrator | 2026-04-08 00:42:11.488567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488577 | orchestrator | Wednesday 08 April 2026 00:42:04 +0000 (0:00:00.186) 0:00:28.228 ******* 2026-04-08 00:42:11.488586 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488595 | orchestrator | 2026-04-08 00:42:11.488605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488614 | orchestrator | Wednesday 08 April 2026 00:42:04 +0000 (0:00:00.204) 0:00:28.432 ******* 2026-04-08 00:42:11.488622 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488631 | orchestrator | 2026-04-08 00:42:11.488660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488674 | orchestrator | Wednesday 08 April 2026 00:42:05 +0000 (0:00:00.222) 0:00:28.655 ******* 2026-04-08 00:42:11.488682 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488691 | orchestrator | 2026-04-08 00:42:11.488700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488708 | orchestrator | Wednesday 08 April 2026 00:42:05 +0000 (0:00:00.213) 0:00:28.869 ******* 2026-04-08 00:42:11.488716 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488726 | orchestrator | 2026-04-08 00:42:11.488735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488754 | orchestrator | Wednesday 08 April 2026 00:42:05 +0000 (0:00:00.178) 0:00:29.048 ******* 2026-04-08 00:42:11.488763 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488772 | orchestrator | 2026-04-08 
00:42:11.488781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488791 | orchestrator | Wednesday 08 April 2026 00:42:05 +0000 (0:00:00.144) 0:00:29.192 ******* 2026-04-08 00:42:11.488802 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488811 | orchestrator | 2026-04-08 00:42:11.488820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488830 | orchestrator | Wednesday 08 April 2026 00:42:05 +0000 (0:00:00.198) 0:00:29.391 ******* 2026-04-08 00:42:11.488840 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488849 | orchestrator | 2026-04-08 00:42:11.488883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488901 | orchestrator | Wednesday 08 April 2026 00:42:05 +0000 (0:00:00.165) 0:00:29.556 ******* 2026-04-08 00:42:11.488910 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-08 00:42:11.488919 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-08 00:42:11.488928 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-08 00:42:11.488937 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-08 00:42:11.488946 | orchestrator | 2026-04-08 00:42:11.488954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488963 | orchestrator | Wednesday 08 April 2026 00:42:06 +0000 (0:00:00.769) 0:00:30.326 ******* 2026-04-08 00:42:11.488971 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:11.488980 | orchestrator | 2026-04-08 00:42:11.488990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:11.488999 | orchestrator | Wednesday 08 April 2026 00:42:06 +0000 (0:00:00.177) 0:00:30.504 ******* 2026-04-08 00:42:11.489008 | orchestrator | skipping: [testbed-node-4] 2026-04-08 
00:42:11.489016 | orchestrator | 
2026-04-08 00:42:11.489025 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:42:11.489034 | orchestrator | Wednesday 08 April 2026 00:42:07 +0000 (0:00:00.157) 0:00:30.661 *******
2026-04-08 00:42:11.489043 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:11.489052 | orchestrator | 
2026-04-08 00:42:11.489061 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:42:11.489070 | orchestrator | Wednesday 08 April 2026 00:42:07 +0000 (0:00:00.634) 0:00:31.296 *******
2026-04-08 00:42:11.489079 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:11.489088 | orchestrator | 
2026-04-08 00:42:11.489096 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-08 00:42:11.489104 | orchestrator | Wednesday 08 April 2026 00:42:07 +0000 (0:00:00.194) 0:00:31.490 *******
2026-04-08 00:42:11.489112 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:11.489121 | orchestrator | 
2026-04-08 00:42:11.489130 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-08 00:42:11.489139 | orchestrator | Wednesday 08 April 2026 00:42:08 +0000 (0:00:00.137) 0:00:31.628 *******
2026-04-08 00:42:11.489148 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31d7fbda-737c-5413-835b-7dea8c782162'}})
2026-04-08 00:42:11.489158 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d74f3d8-bff6-5917-9df4-f8420d533035'}})
2026-04-08 00:42:11.489167 | orchestrator | 
2026-04-08 00:42:11.489176 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-08 00:42:11.489185 | orchestrator | Wednesday 08 April 2026 00:42:08 +0000 (0:00:00.192) 0:00:31.821 *******
2026-04-08 00:42:11.489197 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:11.489207 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:11.489227 | orchestrator | 
2026-04-08 00:42:11.489240 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-08 00:42:11.489249 | orchestrator | Wednesday 08 April 2026 00:42:10 +0000 (0:00:01.822) 0:00:33.643 *******
2026-04-08 00:42:11.489258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:11.489269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:11.489278 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:11.489287 | orchestrator | 
2026-04-08 00:42:11.489296 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-08 00:42:11.489304 | orchestrator | Wednesday 08 April 2026 00:42:10 +0000 (0:00:00.148) 0:00:33.791 *******
2026-04-08 00:42:11.489314 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:11.489333 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.825784 | orchestrator | 
2026-04-08 00:42:16.825944 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-08 00:42:16.825963 | orchestrator | Wednesday 08 April 2026 00:42:11 +0000 (0:00:01.412) 0:00:35.204 *******
2026-04-08 00:42:16.825975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:16.825987 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.825997 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826007 | orchestrator | 
2026-04-08 00:42:16.826059 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-08 00:42:16.826070 | orchestrator | Wednesday 08 April 2026 00:42:11 +0000 (0:00:00.130) 0:00:35.335 *******
2026-04-08 00:42:16.826080 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826090 | orchestrator | 
2026-04-08 00:42:16.826099 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-08 00:42:16.826109 | orchestrator | Wednesday 08 April 2026 00:42:11 +0000 (0:00:00.131) 0:00:35.466 *******
2026-04-08 00:42:16.826120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:16.826130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.826140 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826150 | orchestrator | 
2026-04-08 00:42:16.826160 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-08 00:42:16.826170 | orchestrator | Wednesday 08 April 2026 00:42:11 +0000 (0:00:00.138) 0:00:35.604 *******
2026-04-08 00:42:16.826180 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826189 | orchestrator | 
2026-04-08 00:42:16.826199 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-08 00:42:16.826209 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:00.136) 0:00:35.741 *******
2026-04-08 00:42:16.826219 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:16.826229 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.826259 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826270 | orchestrator | 
2026-04-08 00:42:16.826279 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-08 00:42:16.826289 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:00.152) 0:00:35.894 *******
2026-04-08 00:42:16.826299 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826310 | orchestrator | 
2026-04-08 00:42:16.826337 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-08 00:42:16.826349 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:00.281) 0:00:36.175 *******
2026-04-08 00:42:16.826360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:16.826372 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.826389 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826407 | orchestrator | 
2026-04-08 00:42:16.826423 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-08 00:42:16.826438 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:00.129) 0:00:36.305 *******
2026-04-08 00:42:16.826454 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:16.826472 | orchestrator | 
2026-04-08 00:42:16.826489 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-08 00:42:16.826506 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:00.135) 0:00:36.440 *******
2026-04-08 00:42:16.826523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:16.826539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.826556 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826572 | orchestrator | 
2026-04-08 00:42:16.826588 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-08 00:42:16.826606 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:00.151) 0:00:36.592 *******
2026-04-08 00:42:16.826622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:16.826641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.826659 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826671 | orchestrator | 
2026-04-08 00:42:16.826681 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-08 00:42:16.826710 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.154) 0:00:36.747 *******
2026-04-08 00:42:16.826721 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:16.826731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:16.826740 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826750 | orchestrator | 
2026-04-08 00:42:16.826766 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-08 00:42:16.826782 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.171) 0:00:36.918 *******
2026-04-08 00:42:16.826799 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826815 | orchestrator | 
2026-04-08 00:42:16.826832 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-08 00:42:16.826848 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.137) 0:00:37.056 *******
2026-04-08 00:42:16.826902 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826919 | orchestrator | 
2026-04-08 00:42:16.826936 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-08 00:42:16.826960 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.153) 0:00:37.209 *******
2026-04-08 00:42:16.826978 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.826990 | orchestrator | 
2026-04-08 00:42:16.827000 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-08 00:42:16.827009 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.123) 0:00:37.333 *******
2026-04-08 00:42:16.827019 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:42:16.827029 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-08 00:42:16.827039 | orchestrator | }
2026-04-08 00:42:16.827049 | orchestrator | 
2026-04-08 00:42:16.827058 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-08 00:42:16.827067 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.128) 0:00:37.461 *******
2026-04-08 00:42:16.827077 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:42:16.827086 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-08 00:42:16.827096 | orchestrator | }
2026-04-08 00:42:16.827106 | orchestrator | 
2026-04-08 00:42:16.827116 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-08 00:42:16.827125 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.130) 0:00:37.592 *******
2026-04-08 00:42:16.827135 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:42:16.827144 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-08 00:42:16.827154 | orchestrator | }
2026-04-08 00:42:16.827164 | orchestrator | 
2026-04-08 00:42:16.827173 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-08 00:42:16.827183 | orchestrator | Wednesday 08 April 2026 00:42:14 +0000 (0:00:00.134) 0:00:37.727 *******
2026-04-08 00:42:16.827192 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:16.827202 | orchestrator | 
2026-04-08 00:42:16.827211 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-08 00:42:16.827220 | orchestrator | Wednesday 08 April 2026 00:42:14 +0000 (0:00:00.678) 0:00:38.405 *******
2026-04-08 00:42:16.827230 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:16.827239 | orchestrator | 
2026-04-08 00:42:16.827249 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-08 00:42:16.827259 | orchestrator | Wednesday 08 April 2026 00:42:15 +0000 (0:00:00.510) 0:00:38.916 *******
2026-04-08 00:42:16.827268 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:16.827277 | orchestrator | 
2026-04-08 00:42:16.827287 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-08 00:42:16.827299 | orchestrator | Wednesday 08 April 2026 00:42:15 +0000 (0:00:00.134) 0:00:39.574 *******
2026-04-08 00:42:16.827314 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:16.827329 | orchestrator | 
2026-04-08 00:42:16.827354 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-08 00:42:16.827372 | orchestrator | Wednesday 08 April 2026 00:42:15 +0000 (0:00:00.134) 0:00:39.574 *******
2026-04-08 00:42:16.827388 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.827404 | orchestrator | 
2026-04-08 00:42:16.827419 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-08 00:42:16.827435 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.098) 0:00:39.672 *******
2026-04-08 00:42:16.827448 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.827461 | orchestrator | 
2026-04-08 00:42:16.827476 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-08 00:42:16.827492 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.097) 0:00:39.769 *******
2026-04-08 00:42:16.827507 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:42:16.827522 | orchestrator |     "vgs_report": {
2026-04-08 00:42:16.827539 | orchestrator |         "vg": []
2026-04-08 00:42:16.827555 | orchestrator |     }
2026-04-08 00:42:16.827571 | orchestrator | }
2026-04-08 00:42:16.827600 | orchestrator | 
2026-04-08 00:42:16.827618 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-08 00:42:16.827634 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.141) 0:00:39.911 *******
2026-04-08 00:42:16.827648 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.827658 | orchestrator | 
2026-04-08 00:42:16.827668 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-08 00:42:16.827677 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.127) 0:00:40.038 *******
2026-04-08 00:42:16.827687 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.827696 | orchestrator | 
2026-04-08 00:42:16.827706 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-08 00:42:16.827715 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.132) 0:00:40.170 *******
2026-04-08 00:42:16.827725 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.827734 | orchestrator | 
2026-04-08 00:42:16.827743 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-08 00:42:16.827753 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.139) 0:00:40.310 *******
2026-04-08 00:42:16.827763 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:16.827772 | orchestrator | 
2026-04-08 00:42:16.827795 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-08 00:42:21.146483 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.131) 0:00:40.441 *******
2026-04-08 00:42:21.146559 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146567 | orchestrator | 
2026-04-08 00:42:21.146572 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-08 00:42:21.146577 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.125) 0:00:40.567 *******
2026-04-08 00:42:21.146581 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146585 | orchestrator | 
2026-04-08 00:42:21.146589 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-08 00:42:21.146593 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.262) 0:00:40.829 *******
2026-04-08 00:42:21.146597 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146600 | orchestrator | 
2026-04-08 00:42:21.146604 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-08 00:42:21.146608 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.124) 0:00:40.954 *******
2026-04-08 00:42:21.146612 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146616 | orchestrator | 
2026-04-08 00:42:21.146620 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-08 00:42:21.146624 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.139) 0:00:41.093 *******
2026-04-08 00:42:21.146639 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146644 | orchestrator | 
2026-04-08 00:42:21.146650 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-08 00:42:21.146656 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.131) 0:00:41.225 *******
2026-04-08 00:42:21.146662 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146668 | orchestrator | 
2026-04-08 00:42:21.146674 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-08 00:42:21.146680 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.128) 0:00:41.353 *******
2026-04-08 00:42:21.146686 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146692 | orchestrator | 
2026-04-08 00:42:21.146698 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-08 00:42:21.146705 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.119) 0:00:41.473 *******
2026-04-08 00:42:21.146711 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146717 | orchestrator | 
2026-04-08 00:42:21.146723 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-08 00:42:21.146729 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.113) 0:00:41.586 *******
2026-04-08 00:42:21.146734 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146763 | orchestrator | 
2026-04-08 00:42:21.146770 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-08 00:42:21.146776 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.125) 0:00:41.712 *******
2026-04-08 00:42:21.146783 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146789 | orchestrator | 
2026-04-08 00:42:21.146792 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-08 00:42:21.146796 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.111) 0:00:41.824 *******
2026-04-08 00:42:21.146802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.146810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.146816 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146822 | orchestrator | 
2026-04-08 00:42:21.146828 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-08 00:42:21.146834 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.129) 0:00:41.954 *******
2026-04-08 00:42:21.146839 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.146892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.146902 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146908 | orchestrator | 
2026-04-08 00:42:21.146914 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-08 00:42:21.146920 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.135) 0:00:42.089 *******
2026-04-08 00:42:21.146926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.146932 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.146938 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146944 | orchestrator | 
2026-04-08 00:42:21.146951 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-08 00:42:21.146957 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.144) 0:00:42.234 *******
2026-04-08 00:42:21.146964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.146972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.146978 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.146984 | orchestrator | 
2026-04-08 00:42:21.147006 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-08 00:42:21.147012 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.332) 0:00:42.566 *******
2026-04-08 00:42:21.147019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.147025 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.147031 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.147036 | orchestrator | 
2026-04-08 00:42:21.147043 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-08 00:42:21.147049 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:00.171) 0:00:42.737 *******
2026-04-08 00:42:21.147064 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.147070 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.147076 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.147083 | orchestrator | 
2026-04-08 00:42:21.147089 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-08 00:42:21.147096 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:00.139) 0:00:42.878 *******
2026-04-08 00:42:21.147103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.147109 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.147115 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.147122 | orchestrator | 
2026-04-08 00:42:21.147128 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-08 00:42:21.147135 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:00.153) 0:00:43.031 *******
2026-04-08 00:42:21.147141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.147148 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.147154 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.147161 | orchestrator | 
2026-04-08 00:42:21.147167 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-08 00:42:21.147174 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:00.131) 0:00:43.162 *******
2026-04-08 00:42:21.147181 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:21.147188 | orchestrator | 
2026-04-08 00:42:21.147195 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-08 00:42:21.147201 | orchestrator | Wednesday 08 April 2026 00:42:20 +0000 (0:00:00.501) 0:00:43.664 *******
2026-04-08 00:42:21.147208 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:21.147214 | orchestrator | 
2026-04-08 00:42:21.147220 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-08 00:42:21.147226 | orchestrator | Wednesday 08 April 2026 00:42:20 +0000 (0:00:00.480) 0:00:44.144 *******
2026-04-08 00:42:21.147231 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:42:21.147238 | orchestrator | 
2026-04-08 00:42:21.147244 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-08 00:42:21.147251 | orchestrator | Wednesday 08 April 2026 00:42:20 +0000 (0:00:00.157) 0:00:44.302 *******
2026-04-08 00:42:21.147259 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'vg_name': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.147267 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'vg_name': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.147274 | orchestrator | 
2026-04-08 00:42:21.147281 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-08 00:42:21.147288 | orchestrator | Wednesday 08 April 2026 00:42:20 +0000 (0:00:00.206) 0:00:44.509 *******
2026-04-08 00:42:21.147295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.147338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:21.147347 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:21.147360 | orchestrator | 
2026-04-08 00:42:21.147366 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-08 00:42:21.147372 | orchestrator | Wednesday 08 April 2026 00:42:21 +0000 (0:00:00.181) 0:00:44.690 *******
2026-04-08 00:42:21.147378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:21.147394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:26.804276 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:26.804357 | orchestrator | 
2026-04-08 00:42:26.804368 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-08 00:42:26.804377 | orchestrator | Wednesday 08 April 2026 00:42:21 +0000 (0:00:00.178) 0:00:44.868 *******
2026-04-08 00:42:26.804387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'})
2026-04-08 00:42:26.804396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'})
2026-04-08 00:42:26.804404 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:26.804411 | orchestrator | 
2026-04-08 00:42:26.804417 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-08 00:42:26.804424 | orchestrator | Wednesday 08 April 2026 00:42:21 +0000 (0:00:00.146) 0:00:45.015 *******
2026-04-08 00:42:26.804430 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:42:26.804436 | orchestrator |     "lvm_report": {
2026-04-08 00:42:26.804444 | orchestrator |         "lv": [
2026-04-08 00:42:26.804466 | orchestrator |             {
2026-04-08 00:42:26.804472 | orchestrator |                 "lv_name": "osd-block-31d7fbda-737c-5413-835b-7dea8c782162",
2026-04-08 00:42:26.804477 | orchestrator |                 "vg_name": "ceph-31d7fbda-737c-5413-835b-7dea8c782162"
2026-04-08 00:42:26.804481 | orchestrator |             },
2026-04-08 00:42:26.804485 | orchestrator |             {
2026-04-08 00:42:26.804489 | orchestrator |                 "lv_name": "osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035",
2026-04-08 00:42:26.804493 | orchestrator |                 "vg_name": "ceph-6d74f3d8-bff6-5917-9df4-f8420d533035"
2026-04-08 00:42:26.804497 | orchestrator |             }
2026-04-08 00:42:26.804500 | orchestrator |         ],
2026-04-08 00:42:26.804504 | orchestrator |         "pv": [
2026-04-08 00:42:26.804508 | orchestrator |             {
2026-04-08 00:42:26.804512 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-08 00:42:26.804516 | orchestrator |                 "vg_name": "ceph-31d7fbda-737c-5413-835b-7dea8c782162"
2026-04-08 00:42:26.804519 | orchestrator |             },
2026-04-08 00:42:26.804523 | orchestrator |             {
2026-04-08 00:42:26.804527 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-08 00:42:26.804531 | orchestrator |                 "vg_name": "ceph-6d74f3d8-bff6-5917-9df4-f8420d533035"
2026-04-08 00:42:26.804535 | orchestrator |             }
2026-04-08 00:42:26.804539 | orchestrator |         ]
2026-04-08 00:42:26.804543 | orchestrator |     }
2026-04-08 00:42:26.804547 | orchestrator | }
2026-04-08 00:42:26.804551 | orchestrator | 
2026-04-08 00:42:26.804555 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-08 00:42:26.804559 | orchestrator | 
2026-04-08 00:42:26.804562 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-08 00:42:26.804566 | orchestrator | Wednesday 08 April 2026 00:42:21 +0000 (0:00:00.445) 0:00:45.460 *******
2026-04-08 00:42:26.804570 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-08 00:42:26.804574 | orchestrator | 
2026-04-08 00:42:26.804578 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-08 00:42:26.804582 | orchestrator | Wednesday 08 April 2026 00:42:22 +0000 (0:00:00.254) 0:00:45.715 *******
2026-04-08 00:42:26.804600 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:42:26.804604 | orchestrator | 
2026-04-08 00:42:26.804608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804612 | orchestrator | Wednesday 08 April 2026 00:42:22 +0000 (0:00:00.219) 0:00:45.935 *******
2026-04-08 00:42:26.804615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-08 00:42:26.804619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-08 00:42:26.804623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-08 00:42:26.804629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-08 00:42:26.804633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-08 00:42:26.804637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-08 00:42:26.804640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-08 00:42:26.804644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-08 00:42:26.804648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-08 00:42:26.804652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-08 00:42:26.804656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-08 00:42:26.804659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-08 00:42:26.804663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-08 00:42:26.804667 | orchestrator | 
2026-04-08 00:42:26.804671 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804674 | orchestrator | Wednesday 08 April 2026 00:42:22 +0000 (0:00:00.411) 0:00:46.346 *******
2026-04-08 00:42:26.804678 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804682 | orchestrator | 
2026-04-08 00:42:26.804686 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804690 | orchestrator | Wednesday 08 April 2026 00:42:22 +0000 (0:00:00.176) 0:00:46.522 *******
2026-04-08 00:42:26.804693 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804697 | orchestrator | 
2026-04-08 00:42:26.804701 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804716 | orchestrator | Wednesday 08 April 2026 00:42:23 +0000 (0:00:00.177) 0:00:46.700 *******
2026-04-08 00:42:26.804721 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804725 | orchestrator | 
2026-04-08 00:42:26.804728 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804732 | orchestrator | Wednesday 08 April 2026 00:42:23 +0000 (0:00:00.183) 0:00:46.884 *******
2026-04-08 00:42:26.804736 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804739 | orchestrator | 
2026-04-08 00:42:26.804743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804747 | orchestrator | Wednesday 08 April 2026 00:42:23 +0000 (0:00:00.180) 0:00:47.064 *******
2026-04-08 00:42:26.804751 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804754 | orchestrator | 
2026-04-08 00:42:26.804758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804762 | orchestrator | Wednesday 08 April 2026 00:42:23 +0000 (0:00:00.192) 0:00:47.257 *******
2026-04-08 00:42:26.804766 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804770 | orchestrator | 
2026-04-08 00:42:26.804773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804780 | orchestrator | Wednesday 08 April 2026 00:42:24 +0000 (0:00:00.538) 0:00:47.796 *******
2026-04-08 00:42:26.804784 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804791 | orchestrator | 
2026-04-08 00:42:26.804794 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804798 | orchestrator | Wednesday 08 April 2026 00:42:24 +0000 (0:00:00.198) 0:00:47.994 *******
2026-04-08 00:42:26.804802 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:26.804806 | orchestrator | 
2026-04-08 00:42:26.804809 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804813 | orchestrator | Wednesday 08 April 2026 00:42:24 +0000 (0:00:00.188) 0:00:48.183 *******
2026-04-08 00:42:26.804817 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb)
2026-04-08 00:42:26.804821 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb)
2026-04-08 00:42:26.804825 | orchestrator | 
2026-04-08 00:42:26.804829 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804833 | orchestrator | Wednesday 08 April 2026 00:42:24 +0000 (0:00:00.370) 0:00:48.553 *******
2026-04-08 00:42:26.804836 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f)
2026-04-08 00:42:26.804878 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f)
2026-04-08 00:42:26.804883 | orchestrator | 
2026-04-08 00:42:26.804888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:42:26.804892 | orchestrator | Wednesday 08 April 2026 00:42:25 +0000 (0:00:00.393) 0:00:48.946 *******
2026-04-08 00:42:26.804897 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567)
2026-04-08 00:42:26.804901 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567)
2026-04-08 00:42:26.804905 | orchestrator | 
2026-04-08 00:42:26.804910 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-08 00:42:26.804914 | orchestrator | Wednesday 08 April 2026 00:42:25 +0000 (0:00:00.403) 0:00:49.350 ******* 2026-04-08 00:42:26.804919 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5) 2026-04-08 00:42:26.804923 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5) 2026-04-08 00:42:26.804928 | orchestrator | 2026-04-08 00:42:26.804932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:42:26.804937 | orchestrator | Wednesday 08 April 2026 00:42:26 +0000 (0:00:00.420) 0:00:49.770 ******* 2026-04-08 00:42:26.804941 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:42:26.804946 | orchestrator | 2026-04-08 00:42:26.804950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:26.804955 | orchestrator | Wednesday 08 April 2026 00:42:26 +0000 (0:00:00.327) 0:00:50.098 ******* 2026-04-08 00:42:26.804960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-08 00:42:26.804966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-08 00:42:26.804973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-08 00:42:26.804979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-08 00:42:26.804986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-08 00:42:26.804992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-08 00:42:26.805001 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-08 00:42:26.805007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-08 00:42:26.805013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-08 00:42:26.805025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-08 00:42:26.805035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-08 00:42:26.805049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-08 00:42:35.445225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-08 00:42:35.445322 | orchestrator | 2026-04-08 00:42:35.445334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445343 | orchestrator | Wednesday 08 April 2026 00:42:26 +0000 (0:00:00.404) 0:00:50.503 ******* 2026-04-08 00:42:35.445351 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445359 | orchestrator | 2026-04-08 00:42:35.445367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445382 | orchestrator | Wednesday 08 April 2026 00:42:27 +0000 (0:00:00.210) 0:00:50.713 ******* 2026-04-08 00:42:35.445390 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445397 | orchestrator | 2026-04-08 00:42:35.445405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445412 | orchestrator | Wednesday 08 April 2026 00:42:27 +0000 (0:00:00.227) 0:00:50.941 ******* 2026-04-08 00:42:35.445419 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445427 | orchestrator | 2026-04-08 00:42:35.445434 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445456 | orchestrator | Wednesday 08 April 2026 00:42:27 +0000 (0:00:00.577) 0:00:51.519 ******* 2026-04-08 00:42:35.445464 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445472 | orchestrator | 2026-04-08 00:42:35.445479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445486 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:00.195) 0:00:51.715 ******* 2026-04-08 00:42:35.445493 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445501 | orchestrator | 2026-04-08 00:42:35.445508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445515 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:00.191) 0:00:51.906 ******* 2026-04-08 00:42:35.445522 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445530 | orchestrator | 2026-04-08 00:42:35.445537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445544 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:00.221) 0:00:52.128 ******* 2026-04-08 00:42:35.445551 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445559 | orchestrator | 2026-04-08 00:42:35.445566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445573 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:00.193) 0:00:52.322 ******* 2026-04-08 00:42:35.445580 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445588 | orchestrator | 2026-04-08 00:42:35.445595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445602 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:00.175) 0:00:52.498 ******* 
2026-04-08 00:42:35.445610 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-08 00:42:35.445617 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-08 00:42:35.445625 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-08 00:42:35.445632 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-08 00:42:35.445639 | orchestrator | 2026-04-08 00:42:35.445647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445654 | orchestrator | Wednesday 08 April 2026 00:42:29 +0000 (0:00:00.615) 0:00:53.114 ******* 2026-04-08 00:42:35.445661 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445668 | orchestrator | 2026-04-08 00:42:35.445675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445702 | orchestrator | Wednesday 08 April 2026 00:42:29 +0000 (0:00:00.199) 0:00:53.313 ******* 2026-04-08 00:42:35.445710 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445717 | orchestrator | 2026-04-08 00:42:35.445724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445732 | orchestrator | Wednesday 08 April 2026 00:42:29 +0000 (0:00:00.191) 0:00:53.505 ******* 2026-04-08 00:42:35.445739 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445746 | orchestrator | 2026-04-08 00:42:35.445753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:42:35.445761 | orchestrator | Wednesday 08 April 2026 00:42:30 +0000 (0:00:00.198) 0:00:53.703 ******* 2026-04-08 00:42:35.445768 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445777 | orchestrator | 2026-04-08 00:42:35.445785 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-08 00:42:35.445794 | orchestrator | Wednesday 08 April 2026 00:42:30 
+0000 (0:00:00.202) 0:00:53.906 ******* 2026-04-08 00:42:35.445802 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.445810 | orchestrator | 2026-04-08 00:42:35.445819 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-08 00:42:35.445827 | orchestrator | Wednesday 08 April 2026 00:42:30 +0000 (0:00:00.282) 0:00:54.188 ******* 2026-04-08 00:42:35.445859 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2a42094-2be0-50d9-ab62-bd2425088ba2'}}) 2026-04-08 00:42:35.445870 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}}) 2026-04-08 00:42:35.445878 | orchestrator | 2026-04-08 00:42:35.445887 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-08 00:42:35.445896 | orchestrator | Wednesday 08 April 2026 00:42:30 +0000 (0:00:00.214) 0:00:54.403 ******* 2026-04-08 00:42:35.445906 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'}) 2026-04-08 00:42:35.445915 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}) 2026-04-08 00:42:35.445928 | orchestrator | 2026-04-08 00:42:35.445941 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-08 00:42:35.445970 | orchestrator | Wednesday 08 April 2026 00:42:32 +0000 (0:00:01.945) 0:00:56.348 ******* 2026-04-08 00:42:35.445982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:35.445995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:35.446007 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446083 | orchestrator | 2026-04-08 00:42:35.446095 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-08 00:42:35.446104 | orchestrator | Wednesday 08 April 2026 00:42:32 +0000 (0:00:00.152) 0:00:56.500 ******* 2026-04-08 00:42:35.446113 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'}) 2026-04-08 00:42:35.446122 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}) 2026-04-08 00:42:35.446130 | orchestrator | 2026-04-08 00:42:35.446139 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-08 00:42:35.446146 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:01.303) 0:00:57.805 ******* 2026-04-08 00:42:35.446153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:35.446169 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:35.446176 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446183 | orchestrator | 2026-04-08 00:42:35.446190 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-08 00:42:35.446197 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:00.136) 0:00:57.941 ******* 2026-04-08 00:42:35.446204 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446211 | 
orchestrator | 2026-04-08 00:42:35.446218 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-08 00:42:35.446226 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:00.127) 0:00:58.069 ******* 2026-04-08 00:42:35.446233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:35.446240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:35.446247 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446254 | orchestrator | 2026-04-08 00:42:35.446261 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-08 00:42:35.446268 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:00.144) 0:00:58.214 ******* 2026-04-08 00:42:35.446276 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446283 | orchestrator | 2026-04-08 00:42:35.446290 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-08 00:42:35.446304 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:00.142) 0:00:58.357 ******* 2026-04-08 00:42:35.446312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:35.446319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:35.446326 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446333 | orchestrator | 2026-04-08 00:42:35.446340 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-08 00:42:35.446348 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:00.138) 0:00:58.495 ******* 2026-04-08 00:42:35.446355 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446362 | orchestrator | 2026-04-08 00:42:35.446369 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-08 00:42:35.446376 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 (0:00:00.178) 0:00:58.674 ******* 2026-04-08 00:42:35.446383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:35.446390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:35.446398 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:35.446408 | orchestrator | 2026-04-08 00:42:35.446420 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-08 00:42:35.446431 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 (0:00:00.181) 0:00:58.856 ******* 2026-04-08 00:42:35.446443 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:35.446454 | orchestrator | 2026-04-08 00:42:35.446466 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-08 00:42:35.446477 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 (0:00:00.134) 0:00:58.990 ******* 2026-04-08 00:42:35.446500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:41.735757 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:41.735919 | 
orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.735938 | orchestrator | 2026-04-08 00:42:41.735949 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-08 00:42:41.735963 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 (0:00:00.326) 0:00:59.317 ******* 2026-04-08 00:42:41.735974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:41.735985 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:41.735995 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736005 | orchestrator | 2026-04-08 00:42:41.736031 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-08 00:42:41.736042 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 (0:00:00.186) 0:00:59.503 ******* 2026-04-08 00:42:41.736052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:41.736063 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:41.736072 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736082 | orchestrator | 2026-04-08 00:42:41.736093 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-08 00:42:41.736099 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.124) 0:00:59.627 ******* 2026-04-08 00:42:41.736105 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736112 | orchestrator | 2026-04-08 00:42:41.736118 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-08 00:42:41.736127 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.128) 0:00:59.756 ******* 2026-04-08 00:42:41.736138 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736148 | orchestrator | 2026-04-08 00:42:41.736158 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-08 00:42:41.736168 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.113) 0:00:59.870 ******* 2026-04-08 00:42:41.736178 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736189 | orchestrator | 2026-04-08 00:42:41.736199 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-08 00:42:41.736209 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.140) 0:01:00.010 ******* 2026-04-08 00:42:41.736218 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 00:42:41.736229 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-08 00:42:41.736240 | orchestrator | } 2026-04-08 00:42:41.736250 | orchestrator | 2026-04-08 00:42:41.736261 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-08 00:42:41.736272 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.112) 0:01:00.122 ******* 2026-04-08 00:42:41.736283 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 00:42:41.736290 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-08 00:42:41.736298 | orchestrator | } 2026-04-08 00:42:41.736305 | orchestrator | 2026-04-08 00:42:41.736313 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-08 00:42:41.736320 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.129) 0:01:00.252 ******* 2026-04-08 00:42:41.736328 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 00:42:41.736335 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-08 00:42:41.736343 | orchestrator | } 2026-04-08 00:42:41.736354 | orchestrator | 2026-04-08 00:42:41.736364 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-08 00:42:41.736375 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.158) 0:01:00.410 ******* 2026-04-08 00:42:41.736409 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:41.736422 | orchestrator | 2026-04-08 00:42:41.736433 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-08 00:42:41.736444 | orchestrator | Wednesday 08 April 2026 00:42:37 +0000 (0:00:00.501) 0:01:00.912 ******* 2026-04-08 00:42:41.736455 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:41.736465 | orchestrator | 2026-04-08 00:42:41.736475 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-08 00:42:41.736485 | orchestrator | Wednesday 08 April 2026 00:42:37 +0000 (0:00:00.496) 0:01:01.409 ******* 2026-04-08 00:42:41.736497 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:41.736507 | orchestrator | 2026-04-08 00:42:41.736518 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-08 00:42:41.736528 | orchestrator | Wednesday 08 April 2026 00:42:38 +0000 (0:00:00.545) 0:01:01.954 ******* 2026-04-08 00:42:41.736539 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:41.736550 | orchestrator | 2026-04-08 00:42:41.736561 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-08 00:42:41.736571 | orchestrator | Wednesday 08 April 2026 00:42:38 +0000 (0:00:00.305) 0:01:02.260 ******* 2026-04-08 00:42:41.736578 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736586 | orchestrator | 2026-04-08 00:42:41.736593 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-08 00:42:41.736600 | orchestrator | Wednesday 08 April 2026 00:42:38 +0000 (0:00:00.103) 0:01:02.363 ******* 2026-04-08 00:42:41.736607 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736675 | orchestrator | 2026-04-08 00:42:41.736684 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-08 00:42:41.736691 | orchestrator | Wednesday 08 April 2026 00:42:38 +0000 (0:00:00.132) 0:01:02.495 ******* 2026-04-08 00:42:41.736699 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 00:42:41.736707 | orchestrator |  "vgs_report": { 2026-04-08 00:42:41.736718 | orchestrator |  "vg": [] 2026-04-08 00:42:41.736747 | orchestrator |  } 2026-04-08 00:42:41.736758 | orchestrator | } 2026-04-08 00:42:41.736768 | orchestrator | 2026-04-08 00:42:41.736777 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-08 00:42:41.736787 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:00.154) 0:01:02.650 ******* 2026-04-08 00:42:41.736797 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736806 | orchestrator | 2026-04-08 00:42:41.736816 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-08 00:42:41.736826 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:00.140) 0:01:02.791 ******* 2026-04-08 00:42:41.736892 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736904 | orchestrator | 2026-04-08 00:42:41.736914 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-08 00:42:41.736924 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:00.153) 0:01:02.944 ******* 2026-04-08 00:42:41.736933 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736939 | orchestrator | 2026-04-08 00:42:41.736946 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-08 00:42:41.736960 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:00.158) 0:01:03.103 ******* 2026-04-08 00:42:41.736966 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.736973 | orchestrator | 2026-04-08 00:42:41.736983 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-08 00:42:41.736993 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:00.137) 0:01:03.240 ******* 2026-04-08 00:42:41.737004 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737014 | orchestrator | 2026-04-08 00:42:41.737024 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-08 00:42:41.737033 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:00.133) 0:01:03.374 ******* 2026-04-08 00:42:41.737043 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737065 | orchestrator | 2026-04-08 00:42:41.737075 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-08 00:42:41.737086 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:00.130) 0:01:03.504 ******* 2026-04-08 00:42:41.737095 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737104 | orchestrator | 2026-04-08 00:42:41.737114 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-08 00:42:41.737124 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:00.124) 0:01:03.629 ******* 2026-04-08 00:42:41.737134 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737144 | orchestrator | 2026-04-08 00:42:41.737155 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-08 00:42:41.737165 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:00.130) 0:01:03.759 ******* 2026-04-08 00:42:41.737175 | orchestrator | skipping: 
[testbed-node-5] 2026-04-08 00:42:41.737185 | orchestrator | 2026-04-08 00:42:41.737195 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-08 00:42:41.737205 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:00.479) 0:01:04.239 ******* 2026-04-08 00:42:41.737216 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737225 | orchestrator | 2026-04-08 00:42:41.737235 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-08 00:42:41.737246 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:00.149) 0:01:04.388 ******* 2026-04-08 00:42:41.737256 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737267 | orchestrator | 2026-04-08 00:42:41.737278 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-08 00:42:41.737288 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:00.144) 0:01:04.532 ******* 2026-04-08 00:42:41.737299 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737309 | orchestrator | 2026-04-08 00:42:41.737320 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-08 00:42:41.737329 | orchestrator | Wednesday 08 April 2026 00:42:41 +0000 (0:00:00.149) 0:01:04.681 ******* 2026-04-08 00:42:41.737339 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737350 | orchestrator | 2026-04-08 00:42:41.737360 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-08 00:42:41.737370 | orchestrator | Wednesday 08 April 2026 00:42:41 +0000 (0:00:00.136) 0:01:04.818 ******* 2026-04-08 00:42:41.737381 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737391 | orchestrator | 2026-04-08 00:42:41.737401 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-08 00:42:41.737411 | 
orchestrator | Wednesday 08 April 2026 00:42:41 +0000 (0:00:00.146) 0:01:04.965 ******* 2026-04-08 00:42:41.737420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:41.737432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:41.737442 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737452 | orchestrator | 2026-04-08 00:42:41.737462 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-08 00:42:41.737472 | orchestrator | Wednesday 08 April 2026 00:42:41 +0000 (0:00:00.166) 0:01:05.131 ******* 2026-04-08 00:42:41.737482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:41.737493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:41.737503 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:41.737514 | orchestrator | 2026-04-08 00:42:41.737524 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-08 00:42:41.737542 | orchestrator | Wednesday 08 April 2026 00:42:41 +0000 (0:00:00.150) 0:01:05.282 ******* 2026-04-08 00:42:41.737562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.652320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 
00:42:44.652441 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.652462 | orchestrator | 2026-04-08 00:42:44.652481 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-08 00:42:44.652500 | orchestrator | Wednesday 08 April 2026 00:42:41 +0000 (0:00:00.158) 0:01:05.441 ******* 2026-04-08 00:42:44.652517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.652554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.652572 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.652589 | orchestrator | 2026-04-08 00:42:44.652605 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-08 00:42:44.652622 | orchestrator | Wednesday 08 April 2026 00:42:41 +0000 (0:00:00.156) 0:01:05.598 ******* 2026-04-08 00:42:44.652639 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.652656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.652673 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.652690 | orchestrator | 2026-04-08 00:42:44.652707 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-08 00:42:44.652724 | orchestrator | Wednesday 08 April 2026 00:42:42 +0000 (0:00:00.173) 0:01:05.771 ******* 2026-04-08 00:42:44.652741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 
'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.652758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.652775 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.652792 | orchestrator | 2026-04-08 00:42:44.652808 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-08 00:42:44.652825 | orchestrator | Wednesday 08 April 2026 00:42:42 +0000 (0:00:00.161) 0:01:05.933 ******* 2026-04-08 00:42:44.652867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.652883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.652901 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.652919 | orchestrator | 2026-04-08 00:42:44.652938 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-08 00:42:44.652956 | orchestrator | Wednesday 08 April 2026 00:42:42 +0000 (0:00:00.293) 0:01:06.226 ******* 2026-04-08 00:42:44.652975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.652994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.653013 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.653062 | orchestrator | 2026-04-08 00:42:44.653081 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-08 
00:42:44.653099 | orchestrator | Wednesday 08 April 2026 00:42:42 +0000 (0:00:00.143) 0:01:06.370 ******* 2026-04-08 00:42:44.653118 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:44.653136 | orchestrator | 2026-04-08 00:42:44.653152 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-08 00:42:44.653167 | orchestrator | Wednesday 08 April 2026 00:42:43 +0000 (0:00:00.520) 0:01:06.890 ******* 2026-04-08 00:42:44.653181 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:44.653194 | orchestrator | 2026-04-08 00:42:44.653208 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-08 00:42:44.653221 | orchestrator | Wednesday 08 April 2026 00:42:43 +0000 (0:00:00.498) 0:01:07.389 ******* 2026-04-08 00:42:44.653235 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:42:44.653249 | orchestrator | 2026-04-08 00:42:44.653263 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-08 00:42:44.653277 | orchestrator | Wednesday 08 April 2026 00:42:43 +0000 (0:00:00.144) 0:01:07.534 ******* 2026-04-08 00:42:44.653291 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'vg_name': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'}) 2026-04-08 00:42:44.653306 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'vg_name': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}) 2026-04-08 00:42:44.653320 | orchestrator | 2026-04-08 00:42:44.653334 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-08 00:42:44.653348 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:00.167) 0:01:07.702 ******* 2026-04-08 00:42:44.653379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 
'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.653393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.653407 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.653421 | orchestrator | 2026-04-08 00:42:44.653434 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-08 00:42:44.653448 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:00.136) 0:01:07.838 ******* 2026-04-08 00:42:44.653462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.653476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.653489 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.653503 | orchestrator | 2026-04-08 00:42:44.653517 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-08 00:42:44.653530 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:00.151) 0:01:07.990 ******* 2026-04-08 00:42:44.653544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'})  2026-04-08 00:42:44.653558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'})  2026-04-08 00:42:44.653572 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:44.653585 | orchestrator | 2026-04-08 00:42:44.653599 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-08 
00:42:44.653612 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:00.133) 0:01:08.124 ******* 2026-04-08 00:42:44.653626 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 00:42:44.653639 | orchestrator |  "lvm_report": { 2026-04-08 00:42:44.653654 | orchestrator |  "lv": [ 2026-04-08 00:42:44.653677 | orchestrator |  { 2026-04-08 00:42:44.653692 | orchestrator |  "lv_name": "osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2", 2026-04-08 00:42:44.653706 | orchestrator |  "vg_name": "ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2" 2026-04-08 00:42:44.653720 | orchestrator |  }, 2026-04-08 00:42:44.653733 | orchestrator |  { 2026-04-08 00:42:44.653747 | orchestrator |  "lv_name": "osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494", 2026-04-08 00:42:44.653761 | orchestrator |  "vg_name": "ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494" 2026-04-08 00:42:44.653774 | orchestrator |  } 2026-04-08 00:42:44.653788 | orchestrator |  ], 2026-04-08 00:42:44.653801 | orchestrator |  "pv": [ 2026-04-08 00:42:44.653815 | orchestrator |  { 2026-04-08 00:42:44.653853 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-08 00:42:44.653869 | orchestrator |  "vg_name": "ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2" 2026-04-08 00:42:44.653882 | orchestrator |  }, 2026-04-08 00:42:44.653896 | orchestrator |  { 2026-04-08 00:42:44.653909 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-08 00:42:44.653923 | orchestrator |  "vg_name": "ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494" 2026-04-08 00:42:44.653936 | orchestrator |  } 2026-04-08 00:42:44.653950 | orchestrator |  ] 2026-04-08 00:42:44.653963 | orchestrator |  } 2026-04-08 00:42:44.653977 | orchestrator | } 2026-04-08 00:42:44.653991 | orchestrator | 2026-04-08 00:42:44.654004 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:42:44.654083 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-08 00:42:44.654098 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-08 00:42:44.654112 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-08 00:42:44.654126 | orchestrator | 2026-04-08 00:42:44.654139 | orchestrator | 2026-04-08 00:42:44.654151 | orchestrator | 2026-04-08 00:42:44.654174 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:42:44.654187 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:00.139) 0:01:08.263 ******* 2026-04-08 00:42:44.654198 | orchestrator | =============================================================================== 2026-04-08 00:42:44.654211 | orchestrator | Create block VGs -------------------------------------------------------- 5.68s 2026-04-08 00:42:44.654225 | orchestrator | Create block LVs -------------------------------------------------------- 4.10s 2026-04-08 00:42:44.654238 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s 2026-04-08 00:42:44.654252 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s 2026-04-08 00:42:44.654266 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.51s 2026-04-08 00:42:44.654279 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.51s 2026-04-08 00:42:44.654293 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2026-04-08 00:42:44.654307 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s 2026-04-08 00:42:44.654330 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s 2026-04-08 00:42:44.931180 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2026-04-08 
00:42:44.931283 | orchestrator | Print LVM report data --------------------------------------------------- 0.83s 2026-04-08 00:42:44.931298 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-04-08 00:42:44.931310 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s 2026-04-08 00:42:44.931321 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.71s 2026-04-08 00:42:44.931358 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-04-08 00:42:44.931370 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2026-04-08 00:42:44.931394 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.66s 2026-04-08 00:42:44.931406 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.64s 2026-04-08 00:42:44.931416 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.64s 2026-04-08 00:42:44.931427 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2026-04-08 00:42:56.317062 | orchestrator | 2026-04-08 00:42:56 | INFO  | Prepare task for execution of facts. 2026-04-08 00:42:56.395203 | orchestrator | 2026-04-08 00:42:56 | INFO  | Task de22a7b6-2760-43f0-a02d-c53c3da10a1d (facts) was prepared for execution. 2026-04-08 00:42:56.395280 | orchestrator | 2026-04-08 00:42:56 | INFO  | It takes a moment until task de22a7b6-2760-43f0-a02d-c53c3da10a1d (facts) has been started and output is visible here. 
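The Ceph LVM play above gathers LVs and PVs separately ("Get list of Ceph LVs/PVs with associated VGs") and then merges them in the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task into the single `lvm_report` structure shown by "Print LVM report data". A minimal sketch of that combine step, assuming `lvs --reportformat json` / `pvs --reportformat json` style input (`build_lvm_report` is a hypothetical helper, not the playbook's actual code):

```python
import json

def build_lvm_report(lvs_json: str, pvs_json: str) -> dict:
    # `lvs`/`pvs --reportformat json` each emit {"report": [{"lv": [...]}]} /
    # {"report": [{"pv": [...]}]}; merge both lists into one report dict
    # shaped like the lvm_report printed for testbed-node-5 above.
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lvs, "pv": pvs}

# Sample data taken from the report printed above.
_lvs = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2",
     "vg_name": "ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2"}]}]})
_pvs = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2"}]}]})

report = build_lvm_report(_lvs, _pvs)
print(report["pv"][0]["pv_name"])  # /dev/sdb
```

The subsequent "Fail if ... LV defined in lvm_volumes is missing" tasks can then validate `lvm_volumes` entries against this merged report.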
2026-04-08 00:43:08.562540 | orchestrator | 2026-04-08 00:43:08.562661 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-08 00:43:08.562680 | orchestrator | 2026-04-08 00:43:08.562693 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-08 00:43:08.562705 | orchestrator | Wednesday 08 April 2026 00:42:59 +0000 (0:00:00.324) 0:00:00.324 ******* 2026-04-08 00:43:08.562718 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:08.562769 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:08.562784 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:08.562795 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:08.562805 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:08.562870 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:08.562881 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:43:08.562892 | orchestrator | 2026-04-08 00:43:08.562902 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-08 00:43:08.562913 | orchestrator | Wednesday 08 April 2026 00:43:00 +0000 (0:00:01.338) 0:00:01.662 ******* 2026-04-08 00:43:08.562922 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:43:08.562933 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:08.562942 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:08.562952 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:08.562961 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:43:08.562970 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:43:08.562979 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:43:08.562988 | orchestrator | 2026-04-08 00:43:08.562997 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-08 00:43:08.563006 | orchestrator | 2026-04-08 00:43:08.563014 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-08 00:43:08.563023 | orchestrator | Wednesday 08 April 2026 00:43:02 +0000 (0:00:01.093) 0:00:02.756 ******* 2026-04-08 00:43:08.563033 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:08.563043 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:08.563052 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:08.563062 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:08.563072 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:08.563081 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:08.563090 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:43:08.563100 | orchestrator | 2026-04-08 00:43:08.563109 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-08 00:43:08.563118 | orchestrator | 2026-04-08 00:43:08.563130 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-08 00:43:08.563138 | orchestrator | Wednesday 08 April 2026 00:43:07 +0000 (0:00:05.734) 0:00:08.490 ******* 2026-04-08 00:43:08.563148 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:43:08.563156 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:08.563192 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:08.563201 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:08.563211 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:43:08.563219 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:43:08.563228 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:43:08.563237 | orchestrator | 2026-04-08 00:43:08.563246 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:43:08.563256 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:43:08.563267 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-08 00:43:08.563275 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:43:08.563284 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:43:08.563293 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:43:08.563303 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:43:08.563313 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:43:08.563323 | orchestrator | 2026-04-08 00:43:08.563332 | orchestrator | 2026-04-08 00:43:08.563340 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:43:08.563349 | orchestrator | Wednesday 08 April 2026 00:43:08 +0000 (0:00:00.543) 0:00:09.034 ******* 2026-04-08 00:43:08.563358 | orchestrator | =============================================================================== 2026-04-08 00:43:08.563367 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.73s 2026-04-08 00:43:08.563376 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s 2026-04-08 00:43:08.563400 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.09s 2026-04-08 00:43:08.563410 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-04-08 00:43:19.940215 | orchestrator | 2026-04-08 00:43:19 | INFO  | Prepare task for execution of frr. 2026-04-08 00:43:20.007158 | orchestrator | 2026-04-08 00:43:20 | INFO  | Task 0a355853-f02e-4445-92f7-6d5acff74253 (frr) was prepared for execution. 
2026-04-08 00:43:20.007254 | orchestrator | 2026-04-08 00:43:20 | INFO  | It takes a moment until task 0a355853-f02e-4445-92f7-6d5acff74253 (frr) has been started and output is visible here. 2026-04-08 00:43:48.252222 | orchestrator | 2026-04-08 00:43:48.252307 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-08 00:43:48.252315 | orchestrator | 2026-04-08 00:43:48.252320 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-08 00:43:48.252325 | orchestrator | Wednesday 08 April 2026 00:43:23 +0000 (0:00:00.361) 0:00:00.361 ******* 2026-04-08 00:43:48.252330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:43:48.252336 | orchestrator | 2026-04-08 00:43:48.252341 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-08 00:43:48.252345 | orchestrator | Wednesday 08 April 2026 00:43:23 +0000 (0:00:00.256) 0:00:00.617 ******* 2026-04-08 00:43:48.252349 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:48.252354 | orchestrator | 2026-04-08 00:43:48.252358 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-08 00:43:48.252375 | orchestrator | Wednesday 08 April 2026 00:43:25 +0000 (0:00:01.642) 0:00:02.259 ******* 2026-04-08 00:43:48.252380 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:48.252384 | orchestrator | 2026-04-08 00:43:48.252388 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-08 00:43:48.252393 | orchestrator | Wednesday 08 April 2026 00:43:36 +0000 (0:00:10.813) 0:00:13.072 ******* 2026-04-08 00:43:48.252397 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:48.252401 | orchestrator | 2026-04-08 00:43:48.252406 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-08 00:43:48.252411 | orchestrator | Wednesday 08 April 2026 00:43:37 +0000 (0:00:01.043) 0:00:14.116 ******* 2026-04-08 00:43:48.252415 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:48.252419 | orchestrator | 2026-04-08 00:43:48.252423 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-08 00:43:48.252428 | orchestrator | Wednesday 08 April 2026 00:43:38 +0000 (0:00:01.022) 0:00:15.139 ******* 2026-04-08 00:43:48.252432 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:48.252436 | orchestrator | 2026-04-08 00:43:48.252440 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-08 00:43:48.252444 | orchestrator | Wednesday 08 April 2026 00:43:39 +0000 (0:00:01.365) 0:00:16.504 ******* 2026-04-08 00:43:48.252448 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:43:48.252452 | orchestrator | 2026-04-08 00:43:48.252457 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-08 00:43:48.252461 | orchestrator | Wednesday 08 April 2026 00:43:39 +0000 (0:00:00.166) 0:00:16.671 ******* 2026-04-08 00:43:48.252465 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:43:48.252469 | orchestrator | 2026-04-08 00:43:48.252473 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-08 00:43:48.252477 | orchestrator | Wednesday 08 April 2026 00:43:40 +0000 (0:00:00.325) 0:00:16.997 ******* 2026-04-08 00:43:48.252481 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:43:48.252485 | orchestrator | 2026-04-08 00:43:48.252489 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-08 00:43:48.252494 | orchestrator | Wednesday 08 April 2026 00:43:40 +0000 (0:00:00.197) 0:00:17.194 ******* 2026-04-08 
00:43:48.252498 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:43:48.252502 | orchestrator | 2026-04-08 00:43:48.252506 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-08 00:43:48.252511 | orchestrator | Wednesday 08 April 2026 00:43:40 +0000 (0:00:00.178) 0:00:17.372 ******* 2026-04-08 00:43:48.252515 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:43:48.252519 | orchestrator | 2026-04-08 00:43:48.252523 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-08 00:43:48.252527 | orchestrator | Wednesday 08 April 2026 00:43:40 +0000 (0:00:00.157) 0:00:17.530 ******* 2026-04-08 00:43:48.252531 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:48.252535 | orchestrator | 2026-04-08 00:43:48.252539 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-08 00:43:48.252543 | orchestrator | Wednesday 08 April 2026 00:43:41 +0000 (0:00:01.045) 0:00:18.576 ******* 2026-04-08 00:43:48.252548 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-08 00:43:48.252552 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-08 00:43:48.252557 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-08 00:43:48.252561 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-08 00:43:48.252565 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-08 00:43:48.252570 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-08 00:43:48.252578 | orchestrator | 2026-04-08 00:43:48.252582 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-08 00:43:48.252595 | orchestrator | Wednesday 08 April 2026 00:43:45 +0000 (0:00:03.465) 0:00:22.041 ******* 2026-04-08 00:43:48.252600 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:48.252604 | orchestrator | 2026-04-08 00:43:48.252608 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-08 00:43:48.252612 | orchestrator | Wednesday 08 April 2026 00:43:46 +0000 (0:00:01.257) 0:00:23.299 ******* 2026-04-08 00:43:48.252616 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:48.252620 | orchestrator | 2026-04-08 00:43:48.252624 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:43:48.252629 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-08 00:43:48.252633 | orchestrator | 2026-04-08 00:43:48.252638 | orchestrator | 2026-04-08 00:43:48.252650 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:43:48.252655 | orchestrator | Wednesday 08 April 2026 00:43:47 +0000 (0:00:01.444) 0:00:24.744 ******* 2026-04-08 00:43:48.252659 | orchestrator | =============================================================================== 2026-04-08 00:43:48.252663 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.81s 2026-04-08 00:43:48.252667 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.47s 2026-04-08 00:43:48.252671 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.64s 2026-04-08 00:43:48.252675 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.44s 2026-04-08 00:43:48.252679 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.37s 
2026-04-08 00:43:48.252683 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.26s 2026-04-08 00:43:48.252687 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.05s 2026-04-08 00:43:48.252691 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.04s 2026-04-08 00:43:48.252696 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.02s 2026-04-08 00:43:48.252700 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.33s 2026-04-08 00:43:48.252704 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s 2026-04-08 00:43:48.252708 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.20s 2026-04-08 00:43:48.252712 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.18s 2026-04-08 00:43:48.252716 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.17s 2026-04-08 00:43:48.252720 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-04-08 00:43:48.467245 | orchestrator | 2026-04-08 00:43:48.471464 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Apr 8 00:43:48 UTC 2026 2026-04-08 00:43:48.471532 | orchestrator | 2026-04-08 00:43:49.639330 | orchestrator | 2026-04-08 00:43:49 | INFO  | Collection nutshell is prepared for execution 2026-04-08 00:43:49.783364 | orchestrator | 2026-04-08 00:43:49 | INFO  | A [0] - dotfiles 2026-04-08 00:43:59.849080 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [0] - homer 2026-04-08 00:43:59.849845 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [0] - netdata 2026-04-08 00:43:59.849868 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [0] - openstackclient 2026-04-08 00:43:59.849876 | orchestrator | 2026-04-08 00:43:59 
| INFO  | A [0] - phpmyadmin 2026-04-08 00:43:59.849882 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [0] - common 2026-04-08 00:43:59.854208 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- loadbalancer 2026-04-08 00:43:59.854381 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [2] --- opensearch 2026-04-08 00:43:59.854429 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [2] --- mariadb-ng 2026-04-08 00:43:59.854499 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [3] ---- horizon 2026-04-08 00:43:59.854872 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [3] ---- keystone 2026-04-08 00:43:59.855410 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- neutron 2026-04-08 00:43:59.855463 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [5] ------ wait-for-nova 2026-04-08 00:43:59.855601 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [6] ------- octavia 2026-04-08 00:43:59.857540 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- barbican 2026-04-08 00:43:59.857719 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- designate 2026-04-08 00:43:59.858237 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- ironic 2026-04-08 00:43:59.858258 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- placement 2026-04-08 00:43:59.858264 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- magnum 2026-04-08 00:43:59.860244 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- openvswitch 2026-04-08 00:43:59.860281 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [2] --- ovn 2026-04-08 00:43:59.860730 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- memcached 2026-04-08 00:43:59.860995 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- redis 2026-04-08 00:43:59.861425 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- rabbitmq-ng 2026-04-08 00:43:59.861445 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [0] - kubernetes 2026-04-08 00:43:59.864697 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- 
kubeconfig 2026-04-08 00:43:59.864735 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- copy-kubeconfig 2026-04-08 00:43:59.864743 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [0] - ceph 2026-04-08 00:43:59.867544 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [1] -- ceph-pools 2026-04-08 00:43:59.867572 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [2] --- copy-ceph-keys 2026-04-08 00:43:59.867579 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [3] ---- cephclient 2026-04-08 00:43:59.867586 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-08 00:43:59.867593 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- wait-for-keystone 2026-04-08 00:43:59.867864 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-08 00:43:59.868140 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [5] ------ glance 2026-04-08 00:43:59.868152 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [5] ------ cinder 2026-04-08 00:43:59.868439 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [5] ------ nova 2026-04-08 00:43:59.868450 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [4] ----- prometheus 2026-04-08 00:43:59.868678 | orchestrator | 2026-04-08 00:43:59 | INFO  | A [5] ------ grafana 2026-04-08 00:44:00.093486 | orchestrator | 2026-04-08 00:44:00 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-08 00:44:00.093585 | orchestrator | 2026-04-08 00:44:00 | INFO  | Tasks are running in the background 2026-04-08 00:44:02.092751 | orchestrator | 2026-04-08 00:44:02 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-08 00:44:04.353159 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:44:04.355568 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task d59be129-586c-40bd-b2cb-352cba9b231a is in state STARTED 2026-04-08 00:44:04.356601 | orchestrator | 2026-04-08 00:44:04 | INFO 
Task d16106ca-fb42-49ac-87aa-dd6222e0ef10 is in state STARTED
2026-04-08 00:44:04.362970 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:44:04.363664 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:44:04.365284 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task 6582aba5-165b-48c3-932f-1354fe811694 is in state STARTED
2026-04-08 00:44:04.367065 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:44:04.370261 | orchestrator | 2026-04-08 00:44:04 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:44:07.401426 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:44:07.405580 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task d59be129-586c-40bd-b2cb-352cba9b231a is in state STARTED
2026-04-08 00:44:07.405632 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task d16106ca-fb42-49ac-87aa-dd6222e0ef10 is in state STARTED
2026-04-08 00:44:07.405640 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:44:07.405647 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:44:07.405653 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task 6582aba5-165b-48c3-932f-1354fe811694 is in state STARTED
2026-04-08 00:44:07.405984 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:44:07.408489 | orchestrator | 2026-04-08 00:44:07 | INFO  | Wait 1 second(s) until the next check
00:44:20 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:44:23.452922 | orchestrator | 2026-04-08 00:44:23 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:44:23.453024 | orchestrator | 2026-04-08 00:44:23 | INFO  | Task d59be129-586c-40bd-b2cb-352cba9b231a is in state STARTED
2026-04-08 00:44:23.453039 | orchestrator | 2026-04-08 00:44:23 | INFO  | Task d16106ca-fb42-49ac-87aa-dd6222e0ef10 is in state STARTED
2026-04-08 00:44:23.453052 | orchestrator | 2026-04-08 00:44:23 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:44:23.453064 | orchestrator | 2026-04-08 00:44:23 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:44:23.453075 | orchestrator | 2026-04-08 00:44:23 | INFO  | Task 6582aba5-165b-48c3-932f-1354fe811694 is in state STARTED
2026-04-08 00:44:23.453087 | orchestrator | 2026-04-08 00:44:23 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:44:23.453099 | orchestrator | 2026-04-08 00:44:23 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:44:26.377340 | orchestrator |
2026-04-08 00:44:26.377419 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-08 00:44:26.377427 | orchestrator |
2026-04-08 00:44:26.377434 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2026-04-08 00:44:26.377446 | orchestrator | Wednesday 08 April 2026 00:44:11 +0000 (0:00:00.427) 0:00:00.427 *******
2026-04-08 00:44:26.377468 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:44:26.377474 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:44:26.377479 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:44:26.377484 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:44:26.377489 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:44:26.377494 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:44:26.377499 | orchestrator | changed: [testbed-manager]
2026-04-08 00:44:26.377504 | orchestrator |
2026-04-08 00:44:26.377509 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-08 00:44:26.377514 | orchestrator | Wednesday 08 April 2026 00:44:15 +0000 (0:00:04.327) 0:00:04.754 *******
2026-04-08 00:44:26.377520 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-08 00:44:26.377526 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-08 00:44:26.377531 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-08 00:44:26.377536 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-08 00:44:26.377540 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-08 00:44:26.377545 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-08 00:44:26.377550 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-08 00:44:26.377555 | orchestrator |
2026-04-08 00:44:26.377561 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
***
2026-04-08 00:44:26.377567 | orchestrator | Wednesday 08 April 2026 00:44:18 +0000 (0:00:02.778) 0:00:07.533 *******
2026-04-08 00:44:26.377575 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:44:17.077488', 'end': '2026-04-08 00:44:17.087366', 'delta': '0:00:00.009878', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-08 00:44:26.377586 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:44:16.529574', 'end': '2026-04-08 00:44:16.538946', 'delta': '0:00:00.009372', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-08 00:44:26.377592 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:44:17.073529', 'end': '2026-04-08 00:44:17.080450', 'delta': '0:00:00.006921', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-08 00:44:26.377628 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:44:16.920874', 'end': '2026-04-08 00:44:16.933058', 'delta': '0:00:00.012184', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-08 00:44:26.377634 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:44:16.565855', 'end': '2026-04-08 00:44:16.574412', 'delta': '0:00:00.008557', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-08 00:44:26.377640 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:44:18.152224', 'end': '2026-04-08 00:44:18.160305', 'delta': '0:00:00.008081', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-08 00:44:26.377645 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:44:18.037966', 'end': '2026-04-08 00:44:18.042486', 'delta': '0:00:00.004520', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-08 00:44:26.377650 | orchestrator |
2026-04-08 00:44:26.377656 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-08 00:44:26.377661 | orchestrator | Wednesday 08 April 2026 00:44:20 +0000 (0:00:02.035) 0:00:09.568 *******
2026-04-08 00:44:26.377666 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-08 00:44:26.377671 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-08 00:44:26.377676 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-08 00:44:26.377681 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-08 00:44:26.377696 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-08 00:44:26.377704 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-08 00:44:26.377712 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-08 00:44:26.377720 | orchestrator |
2026-04-08 00:44:26.377728 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
******************
2026-04-08 00:44:26.377736 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:01.689) 0:00:11.257 *******
2026-04-08 00:44:26.377744 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-08 00:44:26.377796 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-08 00:44:26.377808 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-08 00:44:26.377817 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-08 00:44:26.377825 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-08 00:44:26.377833 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-08 00:44:26.377841 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-08 00:44:26.377849 | orchestrator |
2026-04-08 00:44:26.377877 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:44:26.377896 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:44:26.377911 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:44:26.377917 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:44:26.377924 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:44:26.377930 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:44:26.377939 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:44:26.377947 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:44:26.377955 | orchestrator |
2026-04-08 00:44:26.377963 | orchestrator |
2026-04-08 00:44:26.377971 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:44:26.377980 | orchestrator | Wednesday 08 April 2026 00:44:24 +0000 (0:00:02.286) 0:00:13.544 *******
2026-04-08 00:44:26.377987 | orchestrator | ===============================================================================
2026-04-08 00:44:26.377995 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.33s
2026-04-08 00:44:26.378003 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.78s
2026-04-08 00:44:26.378010 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.29s
2026-04-08 00:44:26.378073 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.04s
2026-04-08 00:44:26.378083 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.69s
2026-04-08 00:44:26.378092 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:44:26.378102 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task d59be129-586c-40bd-b2cb-352cba9b231a is in state SUCCESS
2026-04-08 00:44:26.378111 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task d16106ca-fb42-49ac-87aa-dd6222e0ef10 is in state STARTED
2026-04-08 00:44:26.389073 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:44:26.391236 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED
2026-04-08 00:44:26.402273 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:44:26.417638 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task 6582aba5-165b-48c3-932f-1354fe811694 is in state STARTED
2026-04-08 00:44:26.417745 | orchestrator | 2026-04-08 00:44:26 | INFO  | Task
533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:44:26.417824 | orchestrator | 2026-04-08 00:44:26 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:44:29.751499 | orchestrator | 2026-04-08 00:44:29 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:44:29.752330 | orchestrator | 2026-04-08 00:44:29 | INFO  | Task d16106ca-fb42-49ac-87aa-dd6222e0ef10 is in state STARTED
2026-04-08 00:44:29.755805 | orchestrator | 2026-04-08 00:44:29 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:44:29.771735 | orchestrator | 2026-04-08 00:44:29 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED
2026-04-08 00:44:29.771842 | orchestrator | 2026-04-08 00:44:29 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:44:29.771851 | orchestrator | 2026-04-08 00:44:29 | INFO  | Task 6582aba5-165b-48c3-932f-1354fe811694 is in state STARTED
2026-04-08 00:44:29.771859 | orchestrator | 2026-04-08 00:44:29 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:44:29.771867 | orchestrator | 2026-04-08 00:44:29 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:44:51.204799 | orchestrator | 2026-04-08 00:44:51 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:44:51.218900 | orchestrator | 2026-04-08 00:44:51 | INFO  | Task d16106ca-fb42-49ac-87aa-dd6222e0ef10 is in state STARTED
2026-04-08 00:44:51.282816 | orchestrator | 2026-04-08 00:44:51 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:44:51.282884 | orchestrator | 2026-04-08 00:44:51 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED
2026-04-08 00:44:51.282891 | orchestrator | 2026-04-08 00:44:51 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:44:51.282896 | orchestrator | 2026-04-08 00:44:51 | INFO  | Task 6582aba5-165b-48c3-932f-1354fe811694 is in state SUCCESS
2026-04-08 00:44:51.282900 | orchestrator | 2026-04-08 00:44:51 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:44:51.282905 | orchestrator | 2026-04-08 00:44:51 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:45:03.504000 | orchestrator | 2026-04-08 00:45:03 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:45:03.504094 | orchestrator | 2026-04-08 00:45:03 | INFO  | Task d16106ca-fb42-49ac-87aa-dd6222e0ef10 is in state SUCCESS
2026-04-08 00:45:03.506870 | orchestrator | 2026-04-08 00:45:03 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:45:03.508225 | orchestrator | 2026-04-08 00:45:03 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED
2026-04-08 00:45:03.509102 | orchestrator | 2026-04-08 00:45:03 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:45:03.510930 | orchestrator | 2026-04-08 00:45:03 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:45:03.510990 | orchestrator | 2026-04-08 00:45:03 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:45:06.554806 | orchestrator | 2026-04-08 00:45:06 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:45:06.555431 | orchestrator | 2026-04-08 00:45:06 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:45:06.556933 | orchestrator | 2026-04-08 00:45:06 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED
2026-04-08 00:45:06.558469 | orchestrator | 2026-04-08 00:45:06 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:45:06.558960 | orchestrator | 2026-04-08 00:45:06 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:45:06.558986 | orchestrator | 2026-04-08 00:45:06 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:45:25.051299 | orchestrator | 2026-04-08 00:45:25 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:45:25.053933 | orchestrator | 2026-04-08 00:45:25 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:45:25.057280 | orchestrator | 2026-04-08 00:45:25 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED
2026-04-08 00:45:25.058436 | orchestrator | 2026-04-08 00:45:25 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:45:25.059561 | orchestrator | 2026-04-08 00:45:25 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:45:25.059603 | orchestrator | 2026-04-08 00:45:25 | INFO  | Wait 1
second(s) until the next check 2026-04-08 00:45:28.147773 | orchestrator | 2026-04-08 00:45:28 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:45:28.148469 | orchestrator | 2026-04-08 00:45:28 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:45:28.150628 | orchestrator | 2026-04-08 00:45:28 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED 2026-04-08 00:45:28.152885 | orchestrator | 2026-04-08 00:45:28 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED 2026-04-08 00:45:28.153248 | orchestrator | 2026-04-08 00:45:28 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED 2026-04-08 00:45:28.153267 | orchestrator | 2026-04-08 00:45:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:45:31.210456 | orchestrator | 2026-04-08 00:45:31 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:45:31.212046 | orchestrator | 2026-04-08 00:45:31 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:45:31.212841 | orchestrator | 2026-04-08 00:45:31 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED 2026-04-08 00:45:31.215725 | orchestrator | 2026-04-08 00:45:31 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED 2026-04-08 00:45:31.218276 | orchestrator | 2026-04-08 00:45:31 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED 2026-04-08 00:45:31.218335 | orchestrator | 2026-04-08 00:45:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:45:34.266074 | orchestrator | 2026-04-08 00:45:34 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:45:34.269856 | orchestrator | 2026-04-08 00:45:34 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:45:34.275479 | orchestrator | 2026-04-08 00:45:34 | INFO  | Task 
913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED 2026-04-08 00:45:34.281859 | orchestrator | 2026-04-08 00:45:34 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED 2026-04-08 00:45:34.281941 | orchestrator | 2026-04-08 00:45:34 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED 2026-04-08 00:45:34.281950 | orchestrator | 2026-04-08 00:45:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:45:37.334159 | orchestrator | 2026-04-08 00:45:37 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:45:37.335836 | orchestrator | 2026-04-08 00:45:37 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:45:37.335904 | orchestrator | 2026-04-08 00:45:37 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED 2026-04-08 00:45:37.336373 | orchestrator | 2026-04-08 00:45:37 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED 2026-04-08 00:45:37.337562 | orchestrator | 2026-04-08 00:45:37 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED 2026-04-08 00:45:37.337591 | orchestrator | 2026-04-08 00:45:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:45:40.382765 | orchestrator | 2026-04-08 00:45:40 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:45:40.383386 | orchestrator | 2026-04-08 00:45:40 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:45:40.383648 | orchestrator | 2026-04-08 00:45:40 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED 2026-04-08 00:45:40.385620 | orchestrator | 2026-04-08 00:45:40 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED 2026-04-08 00:45:40.385667 | orchestrator | 2026-04-08 00:45:40 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED 2026-04-08 00:45:40.385676 | orchestrator | 2026-04-08 00:45:40 | INFO  | Wait 1 
second(s) until the next check 2026-04-08 00:45:43.482317 | orchestrator | 2026-04-08 00:45:43 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:45:43.482425 | orchestrator | 2026-04-08 00:45:43 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:45:43.488950 | orchestrator | 2026-04-08 00:45:43 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state STARTED 2026-04-08 00:45:43.489023 | orchestrator | 2026-04-08 00:45:43 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED 2026-04-08 00:45:43.492100 | orchestrator | 2026-04-08 00:45:43 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED 2026-04-08 00:45:43.492151 | orchestrator | 2026-04-08 00:45:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:45:46.530412 | orchestrator | 2026-04-08 00:45:46 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:45:46.532168 | orchestrator | 2026-04-08 00:45:46 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:45:46.532234 | orchestrator | 2026-04-08 00:45:46 | INFO  | Task 913e31d2-a925-47e2-8d3a-4d38648db881 is in state SUCCESS 2026-04-08 00:45:46.533913 | orchestrator | 2026-04-08 00:45:46.533973 | orchestrator | 2026-04-08 00:45:46.533985 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-08 00:45:46.533994 | orchestrator | 2026-04-08 00:45:46.534001 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-08 00:45:46.534008 | orchestrator | Wednesday 08 April 2026 00:44:12 +0000 (0:00:01.234) 0:00:01.234 ******* 2026-04-08 00:45:46.534080 | orchestrator | ok: [testbed-manager] => { 2026-04-08 00:45:46.534091 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-04-08 00:45:46.534099 | orchestrator | }
2026-04-08 00:45:46.534106 | orchestrator |
2026-04-08 00:45:46.534112 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-08 00:45:46.534119 | orchestrator | Wednesday 08 April 2026 00:44:12 +0000 (0:00:00.337) 0:00:01.572 *******
2026-04-08 00:45:46.534126 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:46.534131 | orchestrator |
2026-04-08 00:45:46.534135 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-08 00:45:46.534139 | orchestrator | Wednesday 08 April 2026 00:44:15 +0000 (0:00:02.503) 0:00:04.075 *******
2026-04-08 00:45:46.534144 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-08 00:45:46.534148 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-08 00:45:46.534152 | orchestrator |
2026-04-08 00:45:46.534156 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-08 00:45:46.534160 | orchestrator | Wednesday 08 April 2026 00:44:17 +0000 (0:00:02.247) 0:00:06.323 *******
2026-04-08 00:45:46.534164 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534167 | orchestrator |
2026-04-08 00:45:46.534171 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-08 00:45:46.534175 | orchestrator | Wednesday 08 April 2026 00:44:19 +0000 (0:00:02.481) 0:00:08.805 *******
2026-04-08 00:45:46.534179 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534182 | orchestrator |
2026-04-08 00:45:46.534186 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-08 00:45:46.534190 | orchestrator | Wednesday 08 April 2026 00:44:21 +0000 (0:00:02.177) 0:00:10.982 *******
2026-04-08 00:45:46.534194 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-08 00:45:46.534197 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:46.534201 | orchestrator |
2026-04-08 00:45:46.534205 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-08 00:45:46.534209 | orchestrator | Wednesday 08 April 2026 00:44:47 +0000 (0:00:25.443) 0:00:36.425 *******
2026-04-08 00:45:46.534212 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534232 | orchestrator |
2026-04-08 00:45:46.534236 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:45:46.534240 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:45:46.534246 | orchestrator |
2026-04-08 00:45:46.534254 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:45:46.534257 | orchestrator | Wednesday 08 April 2026 00:44:49 +0000 (0:00:02.477) 0:00:38.903 *******
2026-04-08 00:45:46.534261 | orchestrator | ===============================================================================
2026-04-08 00:45:46.534265 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.44s
2026-04-08 00:45:46.534269 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.50s
2026-04-08 00:45:46.534273 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.48s
2026-04-08 00:45:46.534277 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.48s
2026-04-08 00:45:46.534281 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.25s
2026-04-08 00:45:46.534285 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.18s
2026-04-08 00:45:46.534289 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.34s
2026-04-08 00:45:46.534296 | orchestrator |
2026-04-08 00:45:46.534300 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-08 00:45:46.534303 | orchestrator |
2026-04-08 00:45:46.534307 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-08 00:45:46.534311 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:00.503) 0:00:00.503 *******
2026-04-08 00:45:46.534315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-08 00:45:46.534320 | orchestrator |
2026-04-08 00:45:46.534333 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-08 00:45:46.534337 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:00.442) 0:00:00.945 *******
2026-04-08 00:45:46.534341 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-08 00:45:46.534345 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-08 00:45:46.534349 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-08 00:45:46.534353 | orchestrator |
2026-04-08 00:45:46.534356 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-08 00:45:46.534360 | orchestrator | Wednesday 08 April 2026 00:44:12 +0000 (0:00:02.361) 0:00:03.306 *******
2026-04-08 00:45:46.534364 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534368 | orchestrator |
2026-04-08 00:45:46.534371 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-08 00:45:46.534375 | orchestrator | Wednesday 08 April 2026 00:44:15 +0000 (0:00:02.336) 0:00:05.644 *******
2026-04-08 00:45:46.534392 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-08 00:45:46.534396 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:46.534400 | orchestrator |
2026-04-08 00:45:46.534404 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-08 00:45:46.534408 | orchestrator | Wednesday 08 April 2026 00:44:51 +0000 (0:00:36.498) 0:00:42.143 *******
2026-04-08 00:45:46.534411 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534415 | orchestrator |
2026-04-08 00:45:46.534419 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-08 00:45:46.534423 | orchestrator | Wednesday 08 April 2026 00:44:52 +0000 (0:00:01.087) 0:00:43.231 *******
2026-04-08 00:45:46.534427 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:46.534431 | orchestrator |
2026-04-08 00:45:46.534438 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-08 00:45:46.534442 | orchestrator | Wednesday 08 April 2026 00:44:55 +0000 (0:00:02.721) 0:00:45.952 *******
2026-04-08 00:45:46.534446 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534450 | orchestrator |
2026-04-08 00:45:46.534453 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-08 00:45:46.534457 | orchestrator | Wednesday 08 April 2026 00:44:58 +0000 (0:00:03.188) 0:00:49.140 *******
2026-04-08 00:45:46.534461 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534465 | orchestrator |
2026-04-08 00:45:46.534469 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-08 00:45:46.534472 | orchestrator | Wednesday 08 April 2026 00:44:59 +0000 (0:00:00.788) 0:00:49.928 *******
2026-04-08 00:45:46.534476 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534480 | orchestrator |
2026-04-08 00:45:46.534484 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-08 00:45:46.534487 | orchestrator | Wednesday 08 April 2026 00:45:00 +0000 (0:00:01.147) 0:00:51.076 *******
2026-04-08 00:45:46.534491 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:46.534504 | orchestrator |
2026-04-08 00:45:46.534508 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:45:46.534512 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:45:46.534516 | orchestrator |
2026-04-08 00:45:46.534523 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:45:46.534527 | orchestrator | Wednesday 08 April 2026 00:45:01 +0000 (0:00:00.565) 0:00:51.649 *******
2026-04-08 00:45:46.534531 | orchestrator | ===============================================================================
2026-04-08 00:45:46.534535 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.50s
2026-04-08 00:45:46.534538 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.19s
2026-04-08 00:45:46.534542 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.72s
2026-04-08 00:45:46.534546 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.36s
2026-04-08 00:45:46.534550 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.34s
2026-04-08 00:45:46.534553 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.15s
2026-04-08 00:45:46.534557 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.09s
2026-04-08 00:45:46.534563 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.79s
2026-04-08 00:45:46.534569 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.57s
2026-04-08 00:45:46.534574 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.44s
2026-04-08 00:45:46.534580 | orchestrator |
2026-04-08 00:45:46.534591 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-08 00:45:46.534597 | orchestrator |
2026-04-08 00:45:46.534603 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-08 00:45:46.534652 | orchestrator | Wednesday 08 April 2026 00:44:32 +0000 (0:00:00.524) 0:00:00.524 *******
2026-04-08 00:45:46.534661 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:46.534667 | orchestrator |
2026-04-08 00:45:46.534671 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-04-08 00:45:46.534676 | orchestrator | Wednesday 08 April 2026 00:44:33 +0000 (0:00:01.380) 0:00:01.905 *******
2026-04-08 00:45:46.534680 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-08 00:45:46.534685 | orchestrator |
2026-04-08 00:45:46.534689 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-08 00:45:46.534693 | orchestrator | Wednesday 08 April 2026 00:44:35 +0000 (0:00:01.092) 0:00:02.997 *******
2026-04-08 00:45:46.534703 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534707 | orchestrator |
2026-04-08 00:45:46.534717 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-08 00:45:46.534721 | orchestrator | Wednesday 08 April 2026 00:44:36 +0000 (0:00:01.232) 0:00:04.229 *******
2026-04-08 00:45:46.534726 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-04-08 00:45:46.534730 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:46.534735 | orchestrator |
2026-04-08 00:45:46.534745 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-08 00:45:46.534749 | orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:59.652) 0:01:03.882 *******
2026-04-08 00:45:46.534754 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:46.534758 | orchestrator |
2026-04-08 00:45:46.534763 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:45:46.534767 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:45:46.534772 | orchestrator |
2026-04-08 00:45:46.534780 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:45:46.534790 | orchestrator | Wednesday 08 April 2026 00:45:43 +0000 (0:00:07.332) 0:01:11.215 *******
2026-04-08 00:45:46.534795 | orchestrator | ===============================================================================
2026-04-08 00:45:46.534820 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 59.65s
2026-04-08 00:45:46.534824 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.33s
2026-04-08 00:45:46.534829 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.38s
2026-04-08 00:45:46.534832 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.23s
2026-04-08 00:45:46.534836 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.09s
2026-04-08 00:45:46.534840 | orchestrator | 2026-04-08 00:45:46 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:45:46.534901 | orchestrator | 2026-04-08 00:45:46 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state STARTED
2026-04-08 00:45:46.535349 | orchestrator | 2026-04-08 00:45:46 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:45:52.645504 | orchestrator | 2026-04-08 00:45:52 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:45:52.645589 | orchestrator | 2026-04-08 00:45:52 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:45:52.647017 | orchestrator | 2026-04-08 00:45:52 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:45:52.647444 | orchestrator | 2026-04-08 00:45:52 | INFO  | Task 533f96c4-e1a9-47b3-8d34-f3cc4ed0b415 is in state SUCCESS
2026-04-08 00:45:52.649604 | orchestrator |
2026-04-08 00:45:52.649647 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:45:52.649655 | orchestrator |
2026-04-08 00:45:52.649662 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:45:52.649691 | orchestrator | Wednesday 08 April 2026 00:44:11 +0000 (0:00:01.097) 0:00:01.097 *******
2026-04-08 00:45:52.649698 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-08 00:45:52.649705 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-08 00:45:52.649711 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-08 00:45:52.649716 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-08 00:45:52.649722 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-08 00:45:52.649728 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-08 00:45:52.649734 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-08 00:45:52.649741 | orchestrator |
2026-04-08 00:45:52.649747 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-08 00:45:52.649753 | orchestrator |
2026-04-08 00:45:52.649759 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-08 00:45:52.649765 | orchestrator | Wednesday 08 April 2026 00:44:13 +0000 (0:00:01.560) 0:00:02.658 *******
2026-04-08 00:45:52.649792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:45:52.649801 | orchestrator |
2026-04-08 00:45:52.649910 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-08 00:45:52.649917 | orchestrator | Wednesday 08 April 2026 00:44:14 +0000 (0:00:01.581) 0:00:04.240 *******
2026-04-08 00:45:52.649924 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:45:52.649932 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:52.649938 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:45:52.649944 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:45:52.649950 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:45:52.649957 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:45:52.649963 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:45:52.649970 | orchestrator |
2026-04-08 00:45:52.649977 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-08 00:45:52.649983 | orchestrator | Wednesday 08 April 2026 00:44:18 +0000 (0:00:03.463) 0:00:07.704 *******
2026-04-08 00:45:52.649989 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:45:52.649996 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:45:52.650002 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:45:52.650009 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:45:52.650046 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:45:52.650054 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:45:52.650062 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:52.650069 | orchestrator |
2026-04-08 00:45:52.650075 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-08 00:45:52.650082 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:04.355) 0:00:12.060 *******
2026-04-08 00:45:52.650089 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:45:52.650095 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:45:52.650102 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:52.650109 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:45:52.650115 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:45:52.650122 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:45:52.650129 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:45:52.650135 | orchestrator |
2026-04-08 00:45:52.650141 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-08 00:45:52.650149 | orchestrator | Wednesday 08 April 2026 00:44:26 +0000 (0:00:03.279) 0:00:15.340 *******
2026-04-08 00:45:52.650155 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:45:52.650162 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:45:52.650169 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:45:52.650177 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:45:52.650192 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:45:52.650199 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:45:52.650206 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:52.650212 | orchestrator |
2026-04-08 00:45:52.650219 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-08 00:45:52.650263 | orchestrator | Wednesday 08 April 2026 00:44:36 +0000 (0:00:10.150) 0:00:25.490 *******
2026-04-08 00:45:52.650271 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:45:52.650278 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:45:52.650285 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:45:52.650291 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:45:52.650297 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:45:52.650304 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:45:52.650310 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:52.650316 | orchestrator |
2026-04-08 00:45:52.650323 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-08 00:45:52.650330 | orchestrator | Wednesday 08 April 2026 00:45:20 +0000 (0:00:44.026) 0:01:09.517 *******
2026-04-08 00:45:52.650338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:45:52.650346 | orchestrator |
2026-04-08 00:45:52.650352 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-08 00:45:52.650358 | orchestrator | Wednesday 08 April 2026 00:45:22 +0000 (0:00:01.996) 0:01:11.513 *******
2026-04-08 00:45:52.650365 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-08 00:45:52.650372 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-08 00:45:52.650378 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-08 00:45:52.650384 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-08 00:45:52.650404 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-08 00:45:52.650410 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-08 00:45:52.650416 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-08 00:45:52.650424 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-08 00:45:52.650432 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-08 00:45:52.650438 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-08 00:45:52.650445 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-08 00:45:52.650451 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-08 00:45:52.650458 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-08 00:45:52.650465 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-08 00:45:52.650471 | orchestrator |
2026-04-08 00:45:52.650477 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-08 00:45:52.650485 | orchestrator | Wednesday 08 April 2026 00:45:27 +0000 (0:00:05.091) 0:01:16.605 *******
2026-04-08 00:45:52.650492 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:52.650499 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:45:52.650505 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:45:52.650511 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:45:52.650517 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:45:52.650524 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:45:52.650531 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:45:52.650537 | orchestrator |
2026-04-08 00:45:52.650544 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-08 00:45:52.650550 | orchestrator | Wednesday 08 April 2026 00:45:29 +0000 (0:00:02.177) 0:01:18.782 *******
2026-04-08 00:45:52.650557 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:45:52.650563 | orchestrator | changed: [testbed-manager]
2026-04-08 00:45:52.650570 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:45:52.650577 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:45:52.650589 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:45:52.650598 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:45:52.650608 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:45:52.650615 | orchestrator |
2026-04-08 00:45:52.650621 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-08 00:45:52.650628 | orchestrator | Wednesday 08 April 2026 00:45:31 +0000 (0:00:01.713) 0:01:20.496 *******
2026-04-08 00:45:52.650635 | orchestrator | ok: [testbed-manager]
2026-04-08 00:45:52.650641 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:45:52.650659 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:45:52.650665 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:45:52.650671 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:45:52.650678 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:45:52.650684 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:45:52.650691 | orchestrator |
2026-04-08 00:45:52.650697 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-08 00:45:52.650702 | orchestrator | Wednesday 08 April 2026 00:45:33 +0000 (0:00:02.008) 0:01:22.504 ******* 2026-04-08 
00:45:52.650708 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:45:52.650714 | orchestrator | ok: [testbed-manager] 2026-04-08 00:45:52.650719 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:45:52.650726 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:45:52.650733 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:45:52.650739 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:45:52.650745 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:45:52.650754 | orchestrator | 2026-04-08 00:45:52.650760 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-08 00:45:52.650767 | orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:02.141) 0:01:24.645 ******* 2026-04-08 00:45:52.650773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-08 00:45:52.650782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:45:52.650790 | orchestrator | 2026-04-08 00:45:52.650797 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-08 00:45:52.650827 | orchestrator | Wednesday 08 April 2026 00:45:37 +0000 (0:00:01.833) 0:01:26.478 ******* 2026-04-08 00:45:52.650834 | orchestrator | changed: [testbed-manager] 2026-04-08 00:45:52.650840 | orchestrator | 2026-04-08 00:45:52.650846 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-08 00:45:52.650852 | orchestrator | Wednesday 08 April 2026 00:45:38 +0000 (0:00:01.854) 0:01:28.333 ******* 2026-04-08 00:45:52.650859 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:45:52.650866 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:45:52.650872 | orchestrator | changed: 
[testbed-node-2] 2026-04-08 00:45:52.650879 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:45:52.650885 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:45:52.650890 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:45:52.650897 | orchestrator | changed: [testbed-manager] 2026-04-08 00:45:52.650903 | orchestrator | 2026-04-08 00:45:52.650909 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:45:52.650916 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:45:52.650924 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:45:52.650930 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:45:52.650937 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:45:52.650955 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:45:52.650962 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:45:52.650968 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:45:52.650975 | orchestrator | 2026-04-08 00:45:52.650981 | orchestrator | 2026-04-08 00:45:52.650987 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:45:52.650994 | orchestrator | Wednesday 08 April 2026 00:45:50 +0000 (0:00:11.281) 0:01:39.614 ******* 2026-04-08 00:45:52.650999 | orchestrator | =============================================================================== 2026-04-08 00:45:52.651006 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 44.03s 2026-04-08 00:45:52.651013 | 
orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.28s 2026-04-08 00:45:52.651019 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.15s 2026-04-08 00:45:52.651025 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.09s 2026-04-08 00:45:52.651031 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.36s 2026-04-08 00:45:52.651037 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.46s 2026-04-08 00:45:52.651043 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.28s 2026-04-08 00:45:52.651052 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.18s 2026-04-08 00:45:52.651058 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.14s 2026-04-08 00:45:52.651068 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.01s 2026-04-08 00:45:52.651074 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.00s 2026-04-08 00:45:52.651081 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.85s 2026-04-08 00:45:52.651087 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.83s 2026-04-08 00:45:52.651093 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.71s 2026-04-08 00:45:52.651100 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.58s 2026-04-08 00:45:52.651106 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.56s 2026-04-08 00:45:52.651113 | orchestrator | 2026-04-08 00:45:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:45:55.705339 | 
orchestrator | 2026-04-08 00:45:55 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:45:55.705397 | orchestrator | 2026-04-08 00:45:55 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:45:55.705403 | orchestrator | 2026-04-08 00:45:55 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:45:55.705407 | orchestrator | 2026-04-08 00:45:55 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:45:58.758591 | orchestrator | 2026-04-08 00:45:58 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:45:58.759033 | orchestrator | 2026-04-08 00:45:58 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:45:58.766235 | orchestrator | 2026-04-08 00:45:58 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:45:58.766291 | orchestrator | 2026-04-08 00:45:58 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:01.828947 | orchestrator | 2026-04-08 00:46:01 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:01.832239 | orchestrator | 2026-04-08 00:46:01 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:01.840418 | orchestrator | 2026-04-08 00:46:01 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:01.840464 | orchestrator | 2026-04-08 00:46:01 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:04.899428 | orchestrator | 2026-04-08 00:46:04 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:04.902204 | orchestrator | 2026-04-08 00:46:04 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:04.903143 | orchestrator | 2026-04-08 00:46:04 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:04.903172 | orchestrator | 2026-04-08 00:46:04 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:07.949365 | orchestrator | 2026-04-08 00:46:07 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:07.951664 | orchestrator | 2026-04-08 00:46:07 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:07.953344 | orchestrator | 2026-04-08 00:46:07 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:07.953994 | orchestrator | 2026-04-08 00:46:07 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:11.006009 | orchestrator | 2026-04-08 00:46:11 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:11.008386 | orchestrator | 2026-04-08 00:46:11 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:11.011328 | orchestrator | 2026-04-08 00:46:11 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:11.011375 | orchestrator | 2026-04-08 00:46:11 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:14.043285 | orchestrator | 2026-04-08 00:46:14 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:14.043354 | orchestrator | 2026-04-08 00:46:14 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:14.043471 | orchestrator | 2026-04-08 00:46:14 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:14.043479 | orchestrator | 2026-04-08 00:46:14 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:17.085566 | orchestrator | 2026-04-08 00:46:17 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:17.085648 | orchestrator | 2026-04-08 00:46:17 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:17.087141 | orchestrator | 2026-04-08 00:46:17 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:17.087200 | orchestrator | 2026-04-08 00:46:17 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:20.133443 | orchestrator | 2026-04-08 00:46:20 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:20.136953 | orchestrator | 2026-04-08 00:46:20 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:20.139871 | orchestrator | 2026-04-08 00:46:20 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:20.140108 | orchestrator | 2026-04-08 00:46:20 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:23.173080 | orchestrator | 2026-04-08 00:46:23 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:23.173589 | orchestrator | 2026-04-08 00:46:23 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:23.176615 | orchestrator | 2026-04-08 00:46:23 | INFO  | Task 6e097b6d-458a-4049-80e1-d6143b88997b is in state STARTED
2026-04-08 00:46:23.176648 | orchestrator | 2026-04-08 00:46:23 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:26.202271 | orchestrator | 2026-04-08 00:46:26 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED
2026-04-08 00:46:26.202340 | orchestrator | 2026-04-08 00:46:26 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:46:26.203901 | orchestrator | 2026-04-08 00:46:26 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:46:26.203960 | orchestrator | 2026-04-08 00:46:26 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED
2026-04-08 00:46:26.205987 | orchestrator | 2026-04-08 00:46:26 | INFO  | Task 7d4b9747-20ef-4755-89c7-fef128394027 is in state STARTED
2026-04-08 00:46:26.208820 | orchestrator | 2026-04-08 00:46:26 | INFO  | Task
6e097b6d-458a-4049-80e1-d6143b88997b is in state SUCCESS
2026-04-08 00:46:26.210486 | orchestrator |
2026-04-08 00:46:26.210551 | orchestrator |
2026-04-08 00:46:26.210560 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-08 00:46:26.210568 | orchestrator |
2026-04-08 00:46:26.210575 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-08 00:46:26.210582 | orchestrator | Wednesday 08 April 2026 00:44:04 +0000 (0:00:00.482) 0:00:00.482 *******
2026-04-08 00:46:26.210590 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:46:26.210598 | orchestrator |
2026-04-08 00:46:26.210604 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-08 00:46:26.210610 | orchestrator | Wednesday 08 April 2026 00:44:06 +0000 (0:00:01.384) 0:00:01.867 *******
2026-04-08 00:46:26.210660 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:46:26.210668 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:46:26.210685 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:46:26.210691 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:46:26.210697 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:46:26.210704 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:46:26.210710 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:46:26.210716 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:46:26.210741 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:46:26.210752 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:46:26.210761 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:46:26.210776 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:46:26.210791 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:46:26.210801 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:46:26.210811 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:46:26.210822 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:46:26.210833 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:46:26.210891 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:46:26.210903 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:46:26.210921 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:46:26.210932 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:46:26.210943 | orchestrator |
2026-04-08 00:46:26.210954 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-08 00:46:26.210965 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:04.179) 0:00:06.046 *******
2026-04-08 00:46:26.210976 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:46:26.210988 | orchestrator |
2026-04-08 00:46:26.210998 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-08 00:46:26.211020 | orchestrator | Wednesday 08 April 2026 00:44:11 +0000 (0:00:01.626) 0:00:07.673 *******
2026-04-08 00:46:26.211035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211098 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211178 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211262 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211281 | orchestrator |
2026-04-08 00:46:26.211288 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-08 00:46:26.211296 | orchestrator | Wednesday 08 April 2026 00:44:17 +0000 (0:00:06.069) 0:00:13.743 *******
2026-04-08 00:46:26.211304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211312 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211324 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211333 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:46:26.211341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211370 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:26.211382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.211390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.211408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211421 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:26.211428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211435 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:46:26.211444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211469 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:26.211475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211499 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:46:26.211505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 
00:46:26.211522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211533 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:26.211539 | orchestrator | 2026-04-08 00:46:26.211546 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-08 00:46:26.211552 | orchestrator | Wednesday 08 April 2026 00:44:20 +0000 (0:00:02.428) 0:00:16.171 ******* 2026-04-08 00:46:26.211558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211565 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-08 00:46:26.211575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211582 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:46:26.211588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211645 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:26.211662 | orchestrator | skipping: [testbed-node-1] 
2026-04-08 00:46:26.211673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.211708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.211718 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:26.212188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.212219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.212225 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:26.212232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.212237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.212243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:46:26.212249 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:26.212259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:46:26.212265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212280 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:26.212286 | orchestrator |
2026-04-08 00:46:26.212292 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-08 00:46:26.212297 | orchestrator | Wednesday 08 April 2026 00:44:23 +0000 (0:00:03.155) 0:00:19.326 *******
2026-04-08 00:46:26.212303 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:46:26.212308 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:26.212314 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:26.212320 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:26.212325 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:26.212335 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:26.212341 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:26.212347 | orchestrator |
2026-04-08 00:46:26.212352 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-08 00:46:26.212357 | orchestrator | Wednesday 08 April 2026 
00:44:25 +0000 (0:00:01.650) 0:00:20.977 ******* 2026-04-08 00:46:26.212364 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:46:26.212369 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:26.212374 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:46:26.212380 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:26.212385 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:26.212391 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:26.212396 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:46:26.212402 | orchestrator | 2026-04-08 00:46:26.212407 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-08 00:46:26.212413 | orchestrator | Wednesday 08 April 2026 00:44:27 +0000 (0:00:02.124) 0:00:23.101 ******* 2026-04-08 00:46:26.212419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:46:26.212425 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:46:26.212431 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:46:26.212441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:46:26.212452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:46:26.212458 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:46:26.212467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:46:26.212473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:46:26.212479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:46:26.212484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:46:26.212490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:46:26.212522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:46:26.212528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212550 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.212596 | orchestrator |
2026-04-08 00:46:26.212602 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-08 00:46:26.212607 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:07.467) 0:00:30.569 *******
2026-04-08 00:46:26.212613 | orchestrator | [WARNING]: Skipped
2026-04-08 00:46:26.212621 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-08 00:46:26.212627 | orchestrator | to this access issue:
2026-04-08 00:46:26.212633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-08 00:46:26.212638 | orchestrator | directory
2026-04-08 00:46:26.212644 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-08 00:46:26.212649 | orchestrator |
2026-04-08 00:46:26.212655 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-08 00:46:26.212660 | orchestrator | Wednesday 08 April 2026 00:44:35 +0000 (0:00:01.177) 0:00:31.747 *******
2026-04-08 00:46:26.212666 | orchestrator | [WARNING]: Skipped
2026-04-08 00:46:26.212672 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-08 00:46:26.212680 | orchestrator | to this access issue:
2026-04-08 00:46:26.212686 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-08 00:46:26.212691 | orchestrator | directory
2026-04-08 00:46:26.212697 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-08 00:46:26.212702 | orchestrator |
2026-04-08 00:46:26.212708 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-08 00:46:26.212714 | orchestrator | Wednesday 08 April 2026 00:44:37 +0000 (0:00:01.317) 0:00:33.064 *******
2026-04-08 00:46:26.212719 | orchestrator | [WARNING]: Skipped
2026-04-08 00:46:26.212725 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-08 00:46:26.212730 | orchestrator | to this access issue:
2026-04-08 00:46:26.212736 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-08 00:46:26.212742 | orchestrator | directory
2026-04-08 00:46:26.212747 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-08 00:46:26.212752 | orchestrator |
2026-04-08 00:46:26.212758 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-08 00:46:26.212764 | orchestrator | Wednesday 08 April 2026 00:44:38 +0000 (0:00:00.905) 0:00:33.969 *******
2026-04-08 00:46:26.212769 | orchestrator | [WARNING]: Skipped
2026-04-08 00:46:26.212775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-08 00:46:26.212781 | orchestrator | to this access issue:
2026-04-08 00:46:26.212786 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-08 00:46:26.212792 | orchestrator | directory
2026-04-08 00:46:26.212798 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-08 00:46:26.212810 | orchestrator |
2026-04-08 00:46:26.212817 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-08 00:46:26.212823 | orchestrator | Wednesday 08 April 2026 00:44:38 +0000 (0:00:00.730) 0:00:34.700 *******
2026-04-08 00:46:26.212830 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:26.212836 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:26.212864 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:26.212874 | orchestrator | changed: [testbed-manager]
2026-04-08 00:46:26.212883 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:26.212892 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:26.212902 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:26.212912 | orchestrator |
2026-04-08 00:46:26.212922 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-08 00:46:26.212931 | orchestrator | Wednesday 08 April 2026 00:44:44 +0000 (0:00:05.293) 0:00:39.993 *******
2026-04-08 00:46:26.212941 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-08 00:46:26.212949 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-08 00:46:26.212955 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-08 00:46:26.212962 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-08 00:46:26.212969 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-08 00:46:26.212980 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-08 00:46:26.212987 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-08 00:46:26.212993 | orchestrator |
2026-04-08 00:46:26.213000 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-08 00:46:26.213006 | orchestrator | Wednesday 08 April 2026 00:44:47 +0000 (0:00:03.057) 0:00:43.051 *******
2026-04-08 00:46:26.213013 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:26.213019 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:26.213026 | orchestrator | changed: [testbed-manager]
2026-04-08 00:46:26.213032 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:26.213038 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:26.213044 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:26.213051 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:26.213057 | orchestrator |
2026-04-08 00:46:26.213063 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-04-08 00:46:26.213070 | orchestrator | Wednesday 08 April 2026 00:44:50 +0000 (0:00:02.855) 0:00:45.906 *******
2026-04-08 00:46:26.213077 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213100 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213113 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213130 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213144 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213162 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213168 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213174 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213180 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213194 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213200 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213209 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213219 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213230 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213236 | orchestrator |
2026-04-08 00:46:26.213241 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-08 00:46:26.213247 | orchestrator | Wednesday 08 April 2026 00:44:53 +0000 (0:00:03.017) 0:00:48.924 *******
2026-04-08 00:46:26.213252 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-08 00:46:26.213258 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-08 00:46:26.213264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-08 00:46:26.213269 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-08 00:46:26.213275 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-08 00:46:26.213283 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-08 00:46:26.213289 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-08 00:46:26.213294 | orchestrator |
2026-04-08 00:46:26.213300 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-08 00:46:26.213305 | orchestrator | Wednesday 08 April 2026 00:44:56 +0000 (0:00:03.168) 0:00:52.093 *******
2026-04-08 00:46:26.213311 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-08 00:46:26.213317 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-08 00:46:26.213322 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-08 00:46:26.213328 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-08 00:46:26.213333 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-08 00:46:26.213339 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-08 00:46:26.213349 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-08 00:46:26.213355 | orchestrator |
2026-04-08 00:46:26.213364 | orchestrator | TASK [common : Check common containers] ****************************************
2026-04-08 00:46:26.213372 | orchestrator | Wednesday 08 April 2026 00:44:59 +0000 (0:00:03.030) 0:00:55.123 *******
2026-04-08 00:46:26.213382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213396 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213440 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:46:26.213490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213528 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:46:26.213576 | orchestrator |
2026-04-08 00:46:26.213582 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-08 00:46:26.213587 | orchestrator | Wednesday 08 April 2026 00:45:02 +0000 (0:00:03.199) 0:00:58.322 *******
2026-04-08 00:46:26.213593 | orchestrator | changed: [testbed-manager]
2026-04-08 00:46:26.213599 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:26.213608 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:26.213614 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:26.213622 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:26.213628 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:26.213633 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:26.213639 | orchestrator |
2026-04-08 00:46:26.213645 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-08 00:46:26.213650 | orchestrator | Wednesday 08 April 2026 00:45:04 +0000 (0:00:01.951) 0:01:00.274 *******
2026-04-08 00:46:26.213656 | orchestrator | changed: [testbed-manager]
2026-04-08 00:46:26.213661 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:26.213667 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:26.213672 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:26.213678 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:26.213683 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:26.213689 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:26.213694 | orchestrator |
2026-04-08 00:46:26.213700 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-08 00:46:26.213706 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.065) 0:01:02.193 *******
2026-04-08 00:46:26.213712 | orchestrator |
2026-04-08 00:46:26.213720 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-08 00:46:26.213729 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.076) 0:01:02.259 *******
2026-04-08 00:46:26.213736 | orchestrator |
2026-04-08 00:46:26.213746 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-08 00:46:26.213761 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.076) 0:01:02.335 *******
2026-04-08 00:46:26.213772 | orchestrator |
2026-04-08 00:46:26.213780 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-08 00:46:26.213789 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.072) 0:01:02.407 *******
2026-04-08 00:46:26.213797 | orchestrator |
2026-04-08 00:46:26.213806 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-08 00:46:26.213815 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.064) 0:01:02.471 *******
2026-04-08 00:46:26.213823 | orchestrator |
2026-04-08 00:46:26.213832 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-08 00:46:26.213840 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.067) 0:01:02.539 *******
2026-04-08 00:46:26.213869 | orchestrator |
2026-04-08 00:46:26.213878 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-08 00:46:26.213887 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.067) 0:01:02.606 *******
2026-04-08 00:46:26.213896 | orchestrator |
2026-04-08 00:46:26.213904 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-08 00:46:26.213920 | orchestrator | Wednesday 08 April 2026 00:45:06 +0000 (0:00:00.086) 0:01:02.692 *******
2026-04-08 00:46:26.213930 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:26.213939 | orchestrator | changed: [testbed-manager]
2026-04-08 00:46:26.213948 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:26.213956 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:26.213965 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:26.213973 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:26.213983 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:26.213992 | orchestrator |
2026-04-08 00:46:26.214001 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-08 00:46:26.214010 | orchestrator | Wednesday 08 April 2026 00:45:42 +0000 (0:00:35.180) 0:01:37.873 *******
2026-04-08 00:46:26.214046 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:26.214051 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:26.214057 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:26.214062 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:26.214068 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:26.214081 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:26.214087 | orchestrator | changed: [testbed-manager]
2026-04-08 00:46:26.214092 | orchestrator |
2026-04-08 00:46:26.214098 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-08 00:46:26.214104 | orchestrator | Wednesday 08 April
2026 00:46:11 +0000 (0:00:29.584) 0:02:07.458 ******* 2026-04-08 00:46:26.214110 | orchestrator | ok: [testbed-manager] 2026-04-08 00:46:26.214115 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:26.214121 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:26.214126 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:26.214132 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:46:26.214137 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:46:26.214143 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:46:26.214148 | orchestrator | 2026-04-08 00:46:26.214154 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-08 00:46:26.214160 | orchestrator | Wednesday 08 April 2026 00:46:13 +0000 (0:00:01.881) 0:02:09.339 ******* 2026-04-08 00:46:26.214165 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:26.214171 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:46:26.214176 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:46:26.214181 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:46:26.214187 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:46:26.214193 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:46:26.214198 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:46:26.214204 | orchestrator | 2026-04-08 00:46:26.214209 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:46:26.214215 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:46:26.214222 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:46:26.214228 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:46:26.214234 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 
2026-04-08 00:46:26.214240 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:46:26.214245 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:46:26.214251 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:46:26.214256 | orchestrator | 2026-04-08 00:46:26.214262 | orchestrator | 2026-04-08 00:46:26.214268 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:46:26.214273 | orchestrator | Wednesday 08 April 2026 00:46:22 +0000 (0:00:09.291) 0:02:18.630 ******* 2026-04-08 00:46:26.214279 | orchestrator | =============================================================================== 2026-04-08 00:46:26.214285 | orchestrator | common : Restart fluentd container ------------------------------------- 35.18s 2026-04-08 00:46:26.214290 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 29.58s 2026-04-08 00:46:26.214296 | orchestrator | common : Restart cron container ----------------------------------------- 9.29s 2026-04-08 00:46:26.214301 | orchestrator | common : Copying over config.json files for services -------------------- 7.47s 2026-04-08 00:46:26.214307 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.07s 2026-04-08 00:46:26.214312 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.29s 2026-04-08 00:46:26.214317 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.18s 2026-04-08 00:46:26.214329 | orchestrator | common : Check common containers ---------------------------------------- 3.20s 2026-04-08 00:46:26.214335 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.17s 2026-04-08 00:46:26.214340 
| orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.16s 2026-04-08 00:46:26.214346 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.06s 2026-04-08 00:46:26.214352 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.03s 2026-04-08 00:46:26.214357 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.02s 2026-04-08 00:46:26.214363 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.86s 2026-04-08 00:46:26.214375 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.43s 2026-04-08 00:46:26.214381 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.12s 2026-04-08 00:46:26.214386 | orchestrator | common : Creating log volume -------------------------------------------- 1.95s 2026-04-08 00:46:26.214397 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.92s 2026-04-08 00:46:26.214403 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.88s 2026-04-08 00:46:26.214409 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.65s 2026-04-08 00:46:26.214414 | orchestrator | 2026-04-08 00:46:26 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:26.214420 | orchestrator | 2026-04-08 00:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:29.241172 | orchestrator | 2026-04-08 00:46:29 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:29.241807 | orchestrator | 2026-04-08 00:46:29 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:29.242420 | orchestrator | 2026-04-08 00:46:29 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 
2026-04-08 00:46:29.243192 | orchestrator | 2026-04-08 00:46:29 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 2026-04-08 00:46:29.243583 | orchestrator | 2026-04-08 00:46:29 | INFO  | Task 7d4b9747-20ef-4755-89c7-fef128394027 is in state STARTED 2026-04-08 00:46:29.244590 | orchestrator | 2026-04-08 00:46:29 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:29.244639 | orchestrator | 2026-04-08 00:46:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:32.274238 | orchestrator | 2026-04-08 00:46:32 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:32.274553 | orchestrator | 2026-04-08 00:46:32 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:32.275251 | orchestrator | 2026-04-08 00:46:32 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:32.275996 | orchestrator | 2026-04-08 00:46:32 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 2026-04-08 00:46:32.277766 | orchestrator | 2026-04-08 00:46:32 | INFO  | Task 7d4b9747-20ef-4755-89c7-fef128394027 is in state STARTED 2026-04-08 00:46:32.279972 | orchestrator | 2026-04-08 00:46:32 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:32.280032 | orchestrator | 2026-04-08 00:46:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:35.317958 | orchestrator | 2026-04-08 00:46:35 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:35.318544 | orchestrator | 2026-04-08 00:46:35 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:35.319129 | orchestrator | 2026-04-08 00:46:35 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:35.320123 | orchestrator | 2026-04-08 00:46:35 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 
2026-04-08 00:46:35.320922 | orchestrator | 2026-04-08 00:46:35 | INFO  | Task 7d4b9747-20ef-4755-89c7-fef128394027 is in state STARTED 2026-04-08 00:46:35.321663 | orchestrator | 2026-04-08 00:46:35 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:35.321761 | orchestrator | 2026-04-08 00:46:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:38.423722 | orchestrator | 2026-04-08 00:46:38 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:38.425591 | orchestrator | 2026-04-08 00:46:38 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:38.428586 | orchestrator | 2026-04-08 00:46:38 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:38.428793 | orchestrator | 2026-04-08 00:46:38 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 2026-04-08 00:46:38.429946 | orchestrator | 2026-04-08 00:46:38 | INFO  | Task 7d4b9747-20ef-4755-89c7-fef128394027 is in state STARTED 2026-04-08 00:46:38.431723 | orchestrator | 2026-04-08 00:46:38 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:38.432030 | orchestrator | 2026-04-08 00:46:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:41.474266 | orchestrator | 2026-04-08 00:46:41 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:41.474600 | orchestrator | 2026-04-08 00:46:41 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:41.477197 | orchestrator | 2026-04-08 00:46:41 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:41.478158 | orchestrator | 2026-04-08 00:46:41 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 2026-04-08 00:46:41.481414 | orchestrator | 2026-04-08 00:46:41 | INFO  | Task 7d4b9747-20ef-4755-89c7-fef128394027 is in state STARTED 
2026-04-08 00:46:41.482294 | orchestrator | 2026-04-08 00:46:41 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:41.482321 | orchestrator | 2026-04-08 00:46:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:44.561393 | orchestrator | 2026-04-08 00:46:44 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:44.565475 | orchestrator | 2026-04-08 00:46:44 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:44.566287 | orchestrator | 2026-04-08 00:46:44 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:44.567986 | orchestrator | 2026-04-08 00:46:44 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 2026-04-08 00:46:44.568034 | orchestrator | 2026-04-08 00:46:44 | INFO  | Task 7d4b9747-20ef-4755-89c7-fef128394027 is in state SUCCESS 2026-04-08 00:46:44.568471 | orchestrator | 2026-04-08 00:46:44 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:46:44.569429 | orchestrator | 2026-04-08 00:46:44 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:44.569460 | orchestrator | 2026-04-08 00:46:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:47.601321 | orchestrator | 2026-04-08 00:46:47 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:47.601519 | orchestrator | 2026-04-08 00:46:47 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:47.602159 | orchestrator | 2026-04-08 00:46:47 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:47.602959 | orchestrator | 2026-04-08 00:46:47 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 2026-04-08 00:46:47.603779 | orchestrator | 2026-04-08 00:46:47 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 
2026-04-08 00:46:47.604732 | orchestrator | 2026-04-08 00:46:47 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:47.604778 | orchestrator | 2026-04-08 00:46:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:50.633254 | orchestrator | 2026-04-08 00:46:50 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:50.637825 | orchestrator | 2026-04-08 00:46:50 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:50.638132 | orchestrator | 2026-04-08 00:46:50 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:50.642781 | orchestrator | 2026-04-08 00:46:50 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state STARTED 2026-04-08 00:46:50.646761 | orchestrator | 2026-04-08 00:46:50 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:46:50.654407 | orchestrator | 2026-04-08 00:46:50 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:50.654491 | orchestrator | 2026-04-08 00:46:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:53.705009 | orchestrator | 2026-04-08 00:46:53 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:53.705071 | orchestrator | 2026-04-08 00:46:53 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:53.705081 | orchestrator | 2026-04-08 00:46:53 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:53.705089 | orchestrator | 2026-04-08 00:46:53 | INFO  | Task 9524dc0a-a5c9-4eee-83b1-88af8af1aef8 is in state SUCCESS 2026-04-08 00:46:53.705096 | orchestrator | 2026-04-08 00:46:53 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:46:53.705103 | orchestrator | 2026-04-08 00:46:53 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 
2026-04-08 00:46:53.705110 | orchestrator | 2026-04-08 00:46:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:53.705593 | orchestrator | 2026-04-08 00:46:53.705619 | orchestrator | 2026-04-08 00:46:53.705627 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:46:53.705634 | orchestrator | 2026-04-08 00:46:53.705641 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:46:53.705648 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:00.371) 0:00:00.371 ******* 2026-04-08 00:46:53.705655 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:53.705663 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:53.705670 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:53.705677 | orchestrator | 2026-04-08 00:46:53.705684 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:46:53.705691 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:00.337) 0:00:00.709 ******* 2026-04-08 00:46:53.705699 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-08 00:46:53.705706 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-08 00:46:53.705713 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-08 00:46:53.705720 | orchestrator | 2026-04-08 00:46:53.705727 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-08 00:46:53.705750 | orchestrator | 2026-04-08 00:46:53.705758 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-08 00:46:53.705765 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:00.330) 0:00:01.040 ******* 2026-04-08 00:46:53.705772 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 
00:46:53.705779 | orchestrator | 2026-04-08 00:46:53.705786 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-08 00:46:53.705793 | orchestrator | Wednesday 08 April 2026 00:46:28 +0000 (0:00:00.533) 0:00:01.573 ******* 2026-04-08 00:46:53.705800 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-08 00:46:53.705807 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-08 00:46:53.705814 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-08 00:46:53.705821 | orchestrator | 2026-04-08 00:46:53.705828 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-08 00:46:53.705834 | orchestrator | Wednesday 08 April 2026 00:46:29 +0000 (0:00:01.514) 0:00:03.088 ******* 2026-04-08 00:46:53.705840 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-08 00:46:53.705847 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-08 00:46:53.705853 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-08 00:46:53.705860 | orchestrator | 2026-04-08 00:46:53.705865 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-08 00:46:53.705917 | orchestrator | Wednesday 08 April 2026 00:46:31 +0000 (0:00:01.748) 0:00:04.836 ******* 2026-04-08 00:46:53.705927 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:46:53.705934 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:46:53.705941 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:46:53.705948 | orchestrator | 2026-04-08 00:46:53.705955 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-08 00:46:53.705963 | orchestrator | Wednesday 08 April 2026 00:46:33 +0000 (0:00:01.889) 0:00:06.726 ******* 2026-04-08 00:46:53.705970 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:46:53.705976 | orchestrator | 
changed: [testbed-node-0] 2026-04-08 00:46:53.705983 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:46:53.705991 | orchestrator | 2026-04-08 00:46:53.706006 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:46:53.706047 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:46:53.706055 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:46:53.706063 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:46:53.706069 | orchestrator | 2026-04-08 00:46:53.706075 | orchestrator | 2026-04-08 00:46:53.706082 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:46:53.706088 | orchestrator | Wednesday 08 April 2026 00:46:41 +0000 (0:00:08.354) 0:00:15.081 ******* 2026-04-08 00:46:53.706095 | orchestrator | =============================================================================== 2026-04-08 00:46:53.706101 | orchestrator | memcached : Restart memcached container --------------------------------- 8.35s 2026-04-08 00:46:53.706108 | orchestrator | memcached : Check memcached container ----------------------------------- 1.89s 2026-04-08 00:46:53.706114 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.75s 2026-04-08 00:46:53.706120 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.51s 2026-04-08 00:46:53.706126 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.53s 2026-04-08 00:46:53.706133 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-04-08 00:46:53.706147 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2026-04-08 
00:46:53.706153 | orchestrator | 2026-04-08 00:46:53.706159 | orchestrator | 2026-04-08 00:46:53.706165 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:46:53.706171 | orchestrator | 2026-04-08 00:46:53.706177 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:46:53.706183 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:00.291) 0:00:00.291 ******* 2026-04-08 00:46:53.706189 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:53.706196 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:53.706202 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:53.706208 | orchestrator | 2026-04-08 00:46:53.706215 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:46:53.706232 | orchestrator | Wednesday 08 April 2026 00:46:28 +0000 (0:00:00.274) 0:00:00.565 ******* 2026-04-08 00:46:53.706240 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-08 00:46:53.706247 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-08 00:46:53.706254 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-08 00:46:53.706260 | orchestrator | 2026-04-08 00:46:53.706267 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-08 00:46:53.706274 | orchestrator | 2026-04-08 00:46:53.706281 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-08 00:46:53.706287 | orchestrator | Wednesday 08 April 2026 00:46:28 +0000 (0:00:00.434) 0:00:01.000 ******* 2026-04-08 00:46:53.706294 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:46:53.706302 | orchestrator | 2026-04-08 00:46:53.706309 | orchestrator | TASK [redis : Ensuring config directories exist] 
******************************* 2026-04-08 00:46:53.706316 | orchestrator | Wednesday 08 April 2026 00:46:29 +0000 (0:00:00.724) 0:00:01.724 ******* 2026-04-08 00:46:53.706324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 
'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706389 | orchestrator | 2026-04-08 00:46:53.706397 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-08 00:46:53.706404 | orchestrator | Wednesday 08 April 2026 00:46:31 +0000 (0:00:01.960) 0:00:03.684 ******* 2026-04-08 00:46:53.706411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706468 | orchestrator | 2026-04-08 00:46:53.706475 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-08 00:46:53.706482 | orchestrator | Wednesday 08 April 2026 00:46:34 +0000 (0:00:02.702) 0:00:06.387 ******* 2026-04-08 00:46:53.706490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706540 | orchestrator | 2026-04-08 00:46:53.706551 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-08 00:46:53.706559 | orchestrator | Wednesday 08 April 2026 00:46:37 +0000 (0:00:03.026) 0:00:09.414 ******* 2026-04-08 00:46:53.706566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:46:53.706626 | orchestrator | 2026-04-08 00:46:53.706633 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-08 00:46:53.706640 | orchestrator | Wednesday 08 April 2026 00:46:39 +0000 (0:00:02.034) 0:00:11.449 ******* 2026-04-08 00:46:53.706647 | orchestrator | 2026-04-08 00:46:53.706654 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-08 00:46:53.706665 | orchestrator | Wednesday 08 April 2026 00:46:39 +0000 (0:00:00.291) 0:00:11.740 ******* 2026-04-08 00:46:53.706672 | orchestrator | 2026-04-08 00:46:53.706679 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-08 00:46:53.706686 | orchestrator | Wednesday 08 April 2026 00:46:39 +0000 (0:00:00.147) 0:00:11.888 ******* 2026-04-08 00:46:53.706693 | orchestrator | 2026-04-08 00:46:53.706700 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-08 00:46:53.706707 | orchestrator | Wednesday 08 April 2026 00:46:39 +0000 (0:00:00.069) 0:00:11.958 ******* 2026-04-08 
00:46:53.706714 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:46:53.706721 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:46:53.706728 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:46:53.706735 | orchestrator | 2026-04-08 00:46:53.706742 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-08 00:46:53.706749 | orchestrator | Wednesday 08 April 2026 00:46:42 +0000 (0:00:03.001) 0:00:14.959 ******* 2026-04-08 00:46:53.706756 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:46:53.706763 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:46:53.706769 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:46:53.706776 | orchestrator | 2026-04-08 00:46:53.706783 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:46:53.706790 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:46:53.706798 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:46:53.706809 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:46:53.706816 | orchestrator | 2026-04-08 00:46:53.706823 | orchestrator | 2026-04-08 00:46:53.706829 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:46:53.706836 | orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:08.408) 0:00:23.368 ******* 2026-04-08 00:46:53.706842 | orchestrator | =============================================================================== 2026-04-08 00:46:53.706848 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.41s 2026-04-08 00:46:53.706855 | orchestrator | redis : Copying over redis config files --------------------------------- 3.03s 2026-04-08 00:46:53.706862 | 
orchestrator | redis : Restart redis container ----------------------------------------- 3.00s 2026-04-08 00:46:53.706867 | orchestrator | redis : Copying over default config.json files -------------------------- 2.70s 2026-04-08 00:46:53.706887 | orchestrator | redis : Check redis containers ------------------------------------------ 2.03s 2026-04-08 00:46:53.706893 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.96s 2026-04-08 00:46:53.706900 | orchestrator | redis : include_tasks --------------------------------------------------- 0.72s 2026-04-08 00:46:53.706906 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.51s 2026-04-08 00:46:53.706913 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-04-08 00:46:53.706923 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-04-08 00:46:56.775737 | orchestrator | 2026-04-08 00:46:56 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:56.776661 | orchestrator | 2026-04-08 00:46:56 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:46:56.776862 | orchestrator | 2026-04-08 00:46:56 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:46:56.777802 | orchestrator | 2026-04-08 00:46:56 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:46:56.778659 | orchestrator | 2026-04-08 00:46:56 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:46:56.778708 | orchestrator | 2026-04-08 00:46:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:59.816187 | orchestrator | 2026-04-08 00:46:59 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:46:59.816261 | orchestrator | 2026-04-08 00:46:59 | INFO  | Task 
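The watcher output above ("Task … is in state STARTED", wait one second, re-check) is a plain poll-until-terminal-state loop. A minimal sketch of that pattern, assuming a hypothetical `get_state(task_id)` callable; the real OSISM client is not shown in this log and will differ:

```python
import time

# States after which a task no longer needs polling (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Poll each task's state until every task reaches a terminal state.

    Returns a dict mapping task id -> final state. `get_state` and the
    injectable `sleep` are hypothetical hooks for illustration/testing.
    """
    pending = list(task_ids)
    final = {}
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"INFO | Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
            else:
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            print(f"INFO | Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return final
```

Injecting `sleep` keeps the loop testable without real delays.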
3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:47:21.310963 | orchestrator | 2026-04-08 00:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:47:24.349587 | orchestrator | 2026-04-08 00:47:24 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state STARTED 2026-04-08 00:47:24.351568 | orchestrator | 2026-04-08 00:47:24 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED 2026-04-08 00:47:24.353155 | orchestrator | 2026-04-08 00:47:24 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:47:24.355963 | orchestrator | 2026-04-08 00:47:24 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:47:24.357131 | orchestrator | 2026-04-08 00:47:24 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:47:24.357171 | orchestrator | 2026-04-08 00:47:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:47:27.469255 | orchestrator | 2026-04-08 00:47:27 | INFO  | Task f00aaaa3-d452-4040-8aa0-76239c3f2837 is in state SUCCESS 2026-04-08 00:47:27.470736 | orchestrator | 2026-04-08 00:47:27.470791 | orchestrator | 2026-04-08 00:47:27.470798 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:47:27.470805 | orchestrator | 2026-04-08 00:47:27.470811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:47:27.470818 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:00.310) 0:00:00.310 ******* 2026-04-08 00:47:27.470824 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:47:27.470831 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:47:27.470837 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:47:27.470842 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:47:27.470848 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:47:27.470854 | orchestrator | ok: [testbed-node-5] 2026-04-08 
00:47:27.470860 | orchestrator | 2026-04-08 00:47:27.470874 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:47:27.470890 | orchestrator | Wednesday 08 April 2026 00:46:28 +0000 (0:00:00.664) 0:00:00.974 ******* 2026-04-08 00:47:27.470944 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:47:27.470954 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:47:27.470959 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:47:27.470965 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:47:27.470971 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:47:27.470977 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:47:27.470983 | orchestrator | 2026-04-08 00:47:27.471008 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-08 00:47:27.471013 | orchestrator | 2026-04-08 00:47:27.471019 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-08 00:47:27.471025 | orchestrator | Wednesday 08 April 2026 00:46:29 +0000 (0:00:00.911) 0:00:01.886 ******* 2026-04-08 00:47:27.471033 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:47:27.471042 | orchestrator | 2026-04-08 00:47:27.471048 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-08 00:47:27.471055 | orchestrator | Wednesday 08 April 2026 00:46:30 +0000 (0:00:01.215) 0:00:03.101 ******* 2026-04-08 00:47:27.471067 | orchestrator | changed: 
[testbed-node-0] => (item=openvswitch) 2026-04-08 00:47:27.471098 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-08 00:47:27.471105 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-08 00:47:27.471112 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-08 00:47:27.471120 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-08 00:47:27.471134 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-08 00:47:27.471150 | orchestrator | 2026-04-08 00:47:27.471156 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-08 00:47:27.471162 | orchestrator | Wednesday 08 April 2026 00:46:31 +0000 (0:00:01.533) 0:00:04.635 ******* 2026-04-08 00:47:27.471167 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-08 00:47:27.471174 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-08 00:47:27.471179 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-08 00:47:27.471185 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-08 00:47:27.471191 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-08 00:47:27.471197 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-08 00:47:27.471202 | orchestrator | 2026-04-08 00:47:27.471209 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-08 00:47:27.471214 | orchestrator | Wednesday 08 April 2026 00:46:33 +0000 (0:00:01.968) 0:00:06.604 ******* 2026-04-08 00:47:27.471221 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-08 00:47:27.471227 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:47:27.471233 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-08 00:47:27.471257 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:47:27.471266 | orchestrator | 
skipping: [testbed-node-2] => (item=openvswitch)  2026-04-08 00:47:27.471272 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-08 00:47:27.471279 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:47:27.471286 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-08 00:47:27.471293 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:47:27.471300 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:47:27.471308 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-08 00:47:27.471326 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:47:27.471344 | orchestrator | 2026-04-08 00:47:27.471470 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-08 00:47:27.471481 | orchestrator | Wednesday 08 April 2026 00:46:35 +0000 (0:00:01.420) 0:00:08.024 ******* 2026-04-08 00:47:27.471487 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:47:27.471493 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:47:27.471498 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:47:27.471504 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:47:27.471510 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:47:27.471516 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:47:27.471522 | orchestrator | 2026-04-08 00:47:27.471527 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-08 00:47:27.471545 | orchestrator | Wednesday 08 April 2026 00:46:36 +0000 (0:00:01.067) 0:00:09.091 ******* 2026-04-08 00:47:27.471574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.471589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.471596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.471602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471696 | orchestrator |
2026-04-08 00:47:27.471702 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-08 00:47:27.471708 | orchestrator | Wednesday 08 April 2026 00:46:38 +0000 (0:00:01.980) 0:00:11.071 *******
2026-04-08 00:47:27.471714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-08 00:47:27.471772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-08 00:47:27.471811 | orchestrator |
2026-04-08 00:47:27.471817 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-08 00:47:27.471823 | orchestrator | Wednesday 08 April 2026 00:46:41 +0000 (0:00:02.951) 0:00:14.023 *******
2026-04-08 00:47:27.471829 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:47:27.471835 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:47:27.471841 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:47:27.471846 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:47:27.471852 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:47:27.471858 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:47:27.471864 | orchestrator | 2026-04-08 00:47:27.471870 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-08 00:47:27.471876 | orchestrator | Wednesday 08 April 2026 00:46:42 +0000 (0:00:00.720) 0:00:14.743 ******* 2026-04-08 00:47:27.472012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:47:27.472135 | orchestrator | 2026-04-08 00:47:27.472142 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:47:27.472148 | orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:03.925) 0:00:18.669 ******* 2026-04-08 00:47:27.472155 | orchestrator | 2026-04-08 00:47:27.472161 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:47:27.472167 | 
orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:00.126) 0:00:18.796 ******* 2026-04-08 00:47:27.472173 | orchestrator | 2026-04-08 00:47:27.472179 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:47:27.472185 | orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:00.171) 0:00:18.967 ******* 2026-04-08 00:47:27.472191 | orchestrator | 2026-04-08 00:47:27.472201 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:47:27.472215 | orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:00.147) 0:00:19.115 ******* 2026-04-08 00:47:27.472222 | orchestrator | 2026-04-08 00:47:27.472227 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:47:27.472238 | orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:00.346) 0:00:19.461 ******* 2026-04-08 00:47:27.472244 | orchestrator | 2026-04-08 00:47:27.472250 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:47:27.472256 | orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:00.122) 0:00:19.584 ******* 2026-04-08 00:47:27.472262 | orchestrator | 2026-04-08 00:47:27.472268 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-08 00:47:27.472273 | orchestrator | Wednesday 08 April 2026 00:46:47 +0000 (0:00:00.132) 0:00:19.716 ******* 2026-04-08 00:47:27.472279 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:47:27.472285 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:47:27.472291 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:47:27.472296 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:47:27.472302 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:47:27.472308 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:47:27.472313 | orchestrator | 2026-04-08 
00:47:27.472319 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-08 00:47:27.472329 | orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:05.132) 0:00:24.849 ******* 2026-04-08 00:47:27.472341 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:47:27.472350 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:47:27.472356 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:47:27.472362 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:47:27.472368 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:47:27.472376 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:47:27.472384 | orchestrator | 2026-04-08 00:47:27.472390 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-08 00:47:27.472396 | orchestrator | Wednesday 08 April 2026 00:46:53 +0000 (0:00:01.430) 0:00:26.280 ******* 2026-04-08 00:47:27.472403 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:47:27.472410 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:47:27.472420 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:47:27.472427 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:47:27.472434 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:47:27.472440 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:47:27.472446 | orchestrator | 2026-04-08 00:47:27.472454 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-08 00:47:27.472461 | orchestrator | Wednesday 08 April 2026 00:47:02 +0000 (0:00:09.253) 0:00:35.534 ******* 2026-04-08 00:47:27.472467 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-08 00:47:27.472474 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-08 00:47:27.472481 | orchestrator | changed: [testbed-node-1] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-08 00:47:27.472487 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-08 00:47:27.472493 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-08 00:47:27.472576 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-08 00:47:27.472588 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-08 00:47:27.472595 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-08 00:47:27.472601 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-08 00:47:27.472607 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-08 00:47:27.472613 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-08 00:47:27.472628 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-08 00:47:27.472635 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-08 00:47:27.472641 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-08 00:47:27.472653 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-08 00:47:27.472660 | orchestrator | ok: [testbed-node-2] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-08 00:47:27.472665 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-08 00:47:27.472671 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-08 00:47:27.472677 | orchestrator | 2026-04-08 00:47:27.472682 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-08 00:47:27.472686 | orchestrator | Wednesday 08 April 2026 00:47:10 +0000 (0:00:07.541) 0:00:43.076 ******* 2026-04-08 00:47:27.472690 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-08 00:47:27.472696 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:47:27.472702 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-08 00:47:27.472707 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:47:27.472713 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-08 00:47:27.472719 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:47:27.472725 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-08 00:47:27.472731 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-08 00:47:27.472736 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-08 00:47:27.472742 | orchestrator | 2026-04-08 00:47:27.472748 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-08 00:47:27.472754 | orchestrator | Wednesday 08 April 2026 00:47:12 +0000 (0:00:01.888) 0:00:44.964 ******* 2026-04-08 00:47:27.472759 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-08 00:47:27.472765 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:47:27.472772 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-08 00:47:27.472778 | 
orchestrator | skipping: [testbed-node-4]
2026-04-08 00:47:27.472784 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-08 00:47:27.472790 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:47:27.472796 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-08 00:47:27.472803 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-08 00:47:27.472809 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-08 00:47:27.472815 | orchestrator |
2026-04-08 00:47:27.472821 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-08 00:47:27.472827 | orchestrator | Wednesday 08 April 2026 00:47:15 +0000 (0:00:03.288) 0:00:48.253 *******
2026-04-08 00:47:27.472833 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:47:27.472839 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:47:27.472846 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:47:27.472852 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:47:27.472858 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:47:27.472864 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:47:27.472870 | orchestrator |
2026-04-08 00:47:27.472876 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:47:27.472884 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-04-08 00:47:27.472895 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-04-08 00:47:27.472900 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-04-08 00:47:27.472903 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
2026-04-08 00:47:27.472949 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
2026-04-08 00:47:27.472962 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
2026-04-08 00:47:27.472968 | orchestrator |
2026-04-08 00:47:27.472972 | orchestrator |
2026-04-08 00:47:27.472976 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:47:27.472980 | orchestrator | Wednesday 08 April 2026 00:47:24 +0000 (0:00:08.907) 0:00:57.161 *******
2026-04-08 00:47:27.472983 | orchestrator | ===============================================================================
2026-04-08 00:47:27.472987 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.16s
2026-04-08 00:47:27.472991 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.54s
2026-04-08 00:47:27.472995 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 5.13s
2026-04-08 00:47:27.472998 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.93s
2026-04-08 00:47:27.473002 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.29s
2026-04-08 00:47:27.473006 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.95s
2026-04-08 00:47:27.473009 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.98s
2026-04-08 00:47:27.473013 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.97s
2026-04-08 00:47:27.473021 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 1.89s
2026-04-08 00:47:27.473025 | orchestrator | module-load : Load modules ---------------------------------------------- 1.53s
2026-04-08 00:47:27.473028 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.43s
2026-04-08 00:47:27.473032 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.42s
2026-04-08 00:47:27.473036 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.22s
2026-04-08 00:47:27.473040 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.07s
2026-04-08 00:47:27.473043 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s
2026-04-08 00:47:27.473047 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s
2026-04-08 00:47:27.473051 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.72s
2026-04-08 00:47:27.473054 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s
2026-04-08 00:47:27.473058 | orchestrator | 2026-04-08 00:47:27 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:47:27.473399 | orchestrator | 2026-04-08 00:47:27 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED
2026-04-08 00:47:27.476645 | orchestrator | 2026-04-08 00:47:27 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:47:27.478650 | orchestrator | 2026-04-08 00:47:27 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED
2026-04-08 00:47:27.481438 | orchestrator | 2026-04-08 00:47:27 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED
2026-04-08 00:47:27.481537 | orchestrator | 2026-04-08 00:47:27 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:47:30.510371 | orchestrator | 2026-04-08 00:47:30 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state STARTED
2026-04-08 00:47:30.513019 | orchestrator | 2026-04-08 00:47:30 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED
2026-04-08 00:47:30.513369 | orchestrator |
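The PLAY RECAP lines above follow a fixed `host : key=value …` layout, which makes them easy to check mechanically. A small illustrative sketch (not part of the job itself) that parses one recap line and flags a failed host:

```python
import re

# Matches an Ansible PLAY RECAP line such as:
#   testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 ...
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Parse a PLAY RECAP line into (host, {counter_name: int})."""
    match = RECAP_RE.match(line.strip())
    if match is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in re.findall(r"(\w+)=(\d+)", match.group("counters"))
    }
    return match.group("host"), counters

def host_failed(counters):
    """A host is considered failed if any task failed or it was unreachable."""
    return counters.get("failed", 0) > 0 or counters.get("unreachable", 0) > 0
```

With the recap above, every host reports `failed=0` and `unreachable=0`, so `host_failed` returns `False` for all six nodes.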
2026-04-08 00:47:30 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:47:30.514289 | orchestrator | 2026-04-08 00:47:30 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED
2026-04-08 00:47:30.517381 | orchestrator | 2026-04-08 00:47:30 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED
2026-04-08 00:47:30.517415 | orchestrator | 2026-04-08 00:47:30 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:48:46.807742 | orchestrator | 2026-04-08 00:48:46 | INFO  | Task ea06be54-7fcc-4e66-80f8-b9040cb9fd25 is in state SUCCESS
2026-04-08 00:48:46.808608 | orchestrator |
2026-04-08 00:48:46.808642 | orchestrator |
2026-04-08 00:48:46.808653 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-08 00:48:46.808663 |
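The driver producing this output polls each task ID once per second until it leaves the STARTED state. A minimal sketch of such a wait loop, under the assumption of a `get_task_state` callable and an `interval`/`timeout` shape that are illustrative stand-ins, not the actual osism client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll every task until none is still STARTED; log each check."""
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending and time.monotonic() < deadline:
        still_running = []
        for task_id in pending:
            state = get_task_state(task_id)  # hypothetical status lookup
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        if not still_running:
            return True  # every task reached a terminal state
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
        pending = still_running
    return not pending  # False on timeout with tasks still running
```

Only tasks still in STARTED are re-queried on the next cycle, which matches how the log stops mentioning a task once it reports SUCCESS.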
orchestrator |
2026-04-08 00:48:46.808673 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-08 00:48:46.808683 | orchestrator | Wednesday 08 April 2026 00:44:04 +0000 (0:00:00.388) 0:00:00.388 *******
2026-04-08 00:48:46.808692 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:48:46.808702 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:48:46.808742 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:48:46.808752 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:48:46.808760 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:48:46.808768 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:48:46.808776 | orchestrator |
2026-04-08 00:48:46.808784 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-08 00:48:46.808793 | orchestrator | Wednesday 08 April 2026 00:44:05 +0000 (0:00:00.730) 0:00:01.119 *******
2026-04-08 00:48:46.808814 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.808829 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.808842 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.808863 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.808877 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.808890 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.808903 | orchestrator |
2026-04-08 00:48:46.808916 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-08 00:48:46.808930 | orchestrator | Wednesday 08 April 2026 00:44:06 +0000 (0:00:00.634) 0:00:01.856 *******
2026-04-08 00:48:46.808943 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.808956 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.809078 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.809096 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.809109 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.809123 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.809132 | orchestrator |
2026-04-08 00:48:46.809140 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-08 00:48:46.809148 | orchestrator | Wednesday 08 April 2026 00:44:06 +0000 (0:00:00.634) 0:00:02.491 *******
2026-04-08 00:48:46.809157 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:48:46.809165 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:48:46.809173 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:48:46.809196 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:48:46.809227 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:48:46.809235 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:48:46.809243 | orchestrator |
2026-04-08 00:48:46.809251 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-08 00:48:46.809259 | orchestrator | Wednesday 08 April 2026 00:44:09 +0000 (0:00:02.705) 0:00:05.197 *******
2026-04-08 00:48:46.809266 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:48:46.809274 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:48:46.809282 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:48:46.809290 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:48:46.809298 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:48:46.809305 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:48:46.809313 | orchestrator |
2026-04-08 00:48:46.809321 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-08 00:48:46.809329 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:01.163) 0:00:06.361 *******
2026-04-08 00:48:46.809337 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:48:46.809344 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:48:46.809352 | orchestrator | changed: [testbed-node-0]
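The `changed` tasks above toggle kernel networking knobs via sysctl. Assuming they map to the usual keys for IPv4/IPv6 forwarding and router advertisements (the file layout and helper below are illustrative, not taken from the k3s_prereq role):

```python
# Illustrative sketch: render a sysctl.d drop-in covering the settings
# that the k3s_prereq tasks above report as "changed". The accept_ra=2
# value keeps router advertisements accepted even with forwarding on.
FORWARDING_SETTINGS = {
    "net.ipv4.ip_forward": 1,            # Enable IPv4 forwarding
    "net.ipv6.conf.all.forwarding": 1,   # Enable IPv6 forwarding
    "net.ipv6.conf.all.accept_ra": 2,    # Enable IPv6 router advertisements
}

def render_sysctl_dropin(settings):
    """Return the text of a sysctl.d-style file, one 'key = value' per line."""
    return "".join(f"{key} = {value}\n" for key, value in settings.items())
```

Such a file would typically be installed under `/etc/sysctl.d/` and applied with `sysctl --system`, so the settings survive a reboot.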
2026-04-08 00:48:46.809360 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:48:46.809380 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:48:46.809389 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:48:46.809405 | orchestrator |
2026-04-08 00:48:46.809414 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-08 00:48:46.809422 | orchestrator | Wednesday 08 April 2026 00:44:12 +0000 (0:00:01.642) 0:00:08.003 *******
2026-04-08 00:48:46.809430 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.809438 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.809446 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.809453 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.809461 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.809469 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.809476 | orchestrator |
2026-04-08 00:48:46.809484 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-08 00:48:46.809492 | orchestrator | Wednesday 08 April 2026 00:44:13 +0000 (0:00:01.122) 0:00:09.126 *******
2026-04-08 00:48:46.809500 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.809508 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.809516 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.809523 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.809531 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.809539 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.809546 | orchestrator |
2026-04-08 00:48:46.809554 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-08 00:48:46.809562 | orchestrator | Wednesday 08 April 2026 00:44:14 +0000 (0:00:00.541) 0:00:09.667 *******
2026-04-08 00:48:46.809570 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:48:46.809578 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:48:46.809586 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.809594 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:48:46.809602 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:48:46.809610 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.809617 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:48:46.809625 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:48:46.809633 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.809641 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:48:46.809673 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:48:46.809688 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.809696 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:48:46.809704 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:48:46.809712 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:48:46.809720 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.809728 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:48:46.809736 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.809744 | orchestrator |
2026-04-08 00:48:46.809752 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-08 00:48:46.809760 | orchestrator | Wednesday 08 April 2026 00:44:14 +0000 (0:00:01.430) 0:00:10.341 *******
2026-04-08 00:48:46.809768 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.809775 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.809783 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.809791 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.809799 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.809807 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.809814 | orchestrator |
2026-04-08 00:48:46.809823 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-08 00:48:46.809832 | orchestrator | Wednesday 08 April 2026 00:44:16 +0000 (0:00:00.770) 0:00:11.771 *******
2026-04-08 00:48:46.809840 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:48:46.809848 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:48:46.809856 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:48:46.809864 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:48:46.809872 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:48:46.809880 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:48:46.809887 | orchestrator |
2026-04-08 00:48:46.809895 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-08 00:48:46.809903 | orchestrator | Wednesday 08 April 2026 00:44:16 +0000 (0:00:00.770) 0:00:12.542 *******
2026-04-08 00:48:46.809911 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:48:46.809919 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:48:46.809927 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:48:46.809940 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:48:46.809948 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:48:46.809956 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:48:46.809964 | orchestrator |
2026-04-08 00:48:46.809971 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-08 00:48:46.809998 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:05.925) 0:00:18.467 *******
2026-04-08 00:48:46.810006 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.810060 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.810071 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.810079 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.810087 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.810095 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.810103 | orchestrator |
2026-04-08 00:48:46.810111 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-08 00:48:46.810120 | orchestrator | Wednesday 08 April 2026 00:44:24 +0000 (0:00:01.337) 0:00:19.805 *******
2026-04-08 00:48:46.810128 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.810136 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.810143 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.810151 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.810159 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.810167 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.810174 | orchestrator |
2026-04-08 00:48:46.810183 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-08 00:48:46.810202 | orchestrator | Wednesday 08 April 2026 00:44:27 +0000 (0:00:03.182) 0:00:22.988 *******
2026-04-08 00:48:46.810210 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.810218 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.810226 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.810234 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.810242 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.810250 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.810257 | orchestrator |
2026-04-08 00:48:46.810265 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-08 00:48:46.810273 | orchestrator | Wednesday 08 April 2026 00:44:29 +0000 (0:00:02.116) 0:00:25.104 *******
2026-04-08 00:48:46.810282 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-08 00:48:46.810290 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-08 00:48:46.810298 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-08 00:48:46.810306 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-08 00:48:46.810314 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.810322 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-08 00:48:46.810330 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-08 00:48:46.810338 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.810345 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-08 00:48:46.810353 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-08 00:48:46.810361 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.810369 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-08 00:48:46.810377 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-08 00:48:46.810384 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.810392 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.810400 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-08 00:48:46.810408 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-08 00:48:46.810416 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.810424 | orchestrator |
2026-04-08 00:48:46.810432 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-08 00:48:46.810446 | orchestrator | Wednesday 08 April 2026 00:44:30 +0000 (0:00:01.258) 0:00:26.363 *******
2026-04-08 00:48:46.810454 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.810462 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.810470 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.810478 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.810486 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.810494 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.810502 | orchestrator |
2026-04-08 00:48:46.810510 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-08 00:48:46.810518 | orchestrator | Wednesday 08 April 2026 00:44:31 +0000 (0:00:00.933) 0:00:27.296 *******
2026-04-08 00:48:46.810525 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:48:46.810533 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:48:46.810541 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:48:46.810549 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:46.810557 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:46.810565 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:46.810572 | orchestrator |
2026-04-08 00:48:46.810580 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-08 00:48:46.810588 | orchestrator |
2026-04-08 00:48:46.810596 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-08 00:48:46.810604 | orchestrator | Wednesday 08 April 2026 00:44:33 +0000 (0:00:01.379) 0:00:28.676 *******
2026-04-08 00:48:46.810612 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:48:46.810620 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:48:46.810633 | orchestrator
| ok: [testbed-node-2] 2026-04-08 00:48:46.810641 | orchestrator | 2026-04-08 00:48:46.810649 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-08 00:48:46.810657 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:01.416) 0:00:30.093 ******* 2026-04-08 00:48:46.810665 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.810673 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.810681 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.810689 | orchestrator | 2026-04-08 00:48:46.810696 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-08 00:48:46.810704 | orchestrator | Wednesday 08 April 2026 00:44:35 +0000 (0:00:01.036) 0:00:31.130 ******* 2026-04-08 00:48:46.810712 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.810720 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.810733 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.810741 | orchestrator | 2026-04-08 00:48:46.810749 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-08 00:48:46.810757 | orchestrator | Wednesday 08 April 2026 00:44:36 +0000 (0:00:00.845) 0:00:31.975 ******* 2026-04-08 00:48:46.810765 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.810772 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.810780 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.810788 | orchestrator | 2026-04-08 00:48:46.810796 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-08 00:48:46.810804 | orchestrator | Wednesday 08 April 2026 00:44:37 +0000 (0:00:01.304) 0:00:33.280 ******* 2026-04-08 00:48:46.810812 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.810820 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.810827 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.810835 | 
orchestrator | 2026-04-08 00:48:46.810843 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-08 00:48:46.810851 | orchestrator | Wednesday 08 April 2026 00:44:38 +0000 (0:00:00.327) 0:00:33.607 ******* 2026-04-08 00:48:46.810859 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.810867 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.810875 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.810883 | orchestrator | 2026-04-08 00:48:46.810891 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-08 00:48:46.810898 | orchestrator | Wednesday 08 April 2026 00:44:38 +0000 (0:00:00.813) 0:00:34.420 ******* 2026-04-08 00:48:46.810906 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.810914 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.810922 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.810930 | orchestrator | 2026-04-08 00:48:46.810938 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-08 00:48:46.810945 | orchestrator | Wednesday 08 April 2026 00:44:40 +0000 (0:00:01.506) 0:00:35.927 ******* 2026-04-08 00:48:46.810953 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:46.810962 | orchestrator | 2026-04-08 00:48:46.810969 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-08 00:48:46.811004 | orchestrator | Wednesday 08 April 2026 00:44:41 +0000 (0:00:01.077) 0:00:37.005 ******* 2026-04-08 00:48:46.811012 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.811020 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.811028 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.811035 | orchestrator | 2026-04-08 00:48:46.811043 | orchestrator | TASK [k3s_server : Create manifests 
directory on first master] ***************** 2026-04-08 00:48:46.811051 | orchestrator | Wednesday 08 April 2026 00:44:43 +0000 (0:00:02.556) 0:00:39.561 ******* 2026-04-08 00:48:46.811059 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.811067 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811075 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.811083 | orchestrator | 2026-04-08 00:48:46.811091 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-08 00:48:46.811105 | orchestrator | Wednesday 08 April 2026 00:44:44 +0000 (0:00:00.921) 0:00:40.483 ******* 2026-04-08 00:48:46.811113 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.811121 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.811129 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811137 | orchestrator | 2026-04-08 00:48:46.811144 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-08 00:48:46.811152 | orchestrator | Wednesday 08 April 2026 00:44:46 +0000 (0:00:01.919) 0:00:42.403 ******* 2026-04-08 00:48:46.811160 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.811168 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.811176 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811184 | orchestrator | 2026-04-08 00:48:46.811192 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-08 00:48:46.811206 | orchestrator | Wednesday 08 April 2026 00:44:48 +0000 (0:00:01.469) 0:00:43.873 ******* 2026-04-08 00:48:46.811214 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.811222 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.811230 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.811238 | orchestrator | 2026-04-08 00:48:46.811246 | orchestrator | TASK [k3s_server : Deploy kube-vip 
manifest] *********************************** 2026-04-08 00:48:46.811253 | orchestrator | Wednesday 08 April 2026 00:44:48 +0000 (0:00:00.538) 0:00:44.412 ******* 2026-04-08 00:48:46.811261 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.811269 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.811277 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.811285 | orchestrator | 2026-04-08 00:48:46.811293 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-08 00:48:46.811301 | orchestrator | Wednesday 08 April 2026 00:44:49 +0000 (0:00:00.485) 0:00:44.897 ******* 2026-04-08 00:48:46.811309 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811317 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.811324 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.811332 | orchestrator | 2026-04-08 00:48:46.811340 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-08 00:48:46.811348 | orchestrator | Wednesday 08 April 2026 00:44:52 +0000 (0:00:02.807) 0:00:47.705 ******* 2026-04-08 00:48:46.811356 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.811364 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.811372 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.811380 | orchestrator | 2026-04-08 00:48:46.811388 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-08 00:48:46.811396 | orchestrator | Wednesday 08 April 2026 00:44:55 +0000 (0:00:03.037) 0:00:50.746 ******* 2026-04-08 00:48:46.811404 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.811412 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.811420 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.811427 | orchestrator | 2026-04-08 00:48:46.811435 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails)] *** 2026-04-08 00:48:46.811443 | orchestrator | Wednesday 08 April 2026 00:44:55 +0000 (0:00:00.669) 0:00:51.416 ******* 2026-04-08 00:48:46.811456 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-08 00:48:46.811465 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-08 00:48:46.811473 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-08 00:48:46.811481 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-08 00:48:46.811489 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-08 00:48:46.811503 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-08 00:48:46.811511 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-08 00:48:46.811518 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-08 00:48:46.811526 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-08 00:48:46.811534 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-04-08 00:48:46.811542 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-08 00:48:46.811550 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-08 00:48:46.811558 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-08 00:48:46.811566 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-08 00:48:46.811574 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-08 00:48:46.811582 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.811590 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.811598 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.811606 | orchestrator | 2026-04-08 00:48:46.811614 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-08 00:48:46.811622 | orchestrator | Wednesday 08 April 2026 00:45:49 +0000 (0:00:53.950) 0:01:45.367 ******* 2026-04-08 00:48:46.811630 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.811638 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.811646 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.811654 | orchestrator | 2026-04-08 00:48:46.811662 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-08 00:48:46.811674 | orchestrator | Wednesday 08 April 2026 00:45:50 +0000 (0:00:00.357) 0:01:45.724 ******* 2026-04-08 00:48:46.811682 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811690 | orchestrator | changed: 
[testbed-node-1] 2026-04-08 00:48:46.811698 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.811706 | orchestrator | 2026-04-08 00:48:46.811714 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-08 00:48:46.811722 | orchestrator | Wednesday 08 April 2026 00:45:51 +0000 (0:00:01.352) 0:01:47.077 ******* 2026-04-08 00:48:46.811730 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811738 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.811746 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.811753 | orchestrator | 2026-04-08 00:48:46.811762 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-08 00:48:46.811770 | orchestrator | Wednesday 08 April 2026 00:45:52 +0000 (0:00:01.209) 0:01:48.286 ******* 2026-04-08 00:48:46.811777 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.811785 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811793 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.811801 | orchestrator | 2026-04-08 00:48:46.811809 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-08 00:48:46.811817 | orchestrator | Wednesday 08 April 2026 00:46:18 +0000 (0:00:26.303) 0:02:14.590 ******* 2026-04-08 00:48:46.811825 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.811832 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.811845 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.811853 | orchestrator | 2026-04-08 00:48:46.811861 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-08 00:48:46.811869 | orchestrator | Wednesday 08 April 2026 00:46:19 +0000 (0:00:00.687) 0:02:15.277 ******* 2026-04-08 00:48:46.811877 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.811885 | orchestrator | ok: [testbed-node-1] 2026-04-08 
00:48:46.811893 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.811901 | orchestrator | 2026-04-08 00:48:46.811908 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-08 00:48:46.811917 | orchestrator | Wednesday 08 April 2026 00:46:20 +0000 (0:00:00.894) 0:02:16.172 ******* 2026-04-08 00:48:46.811924 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.811932 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.811940 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.811948 | orchestrator | 2026-04-08 00:48:46.811960 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-08 00:48:46.811968 | orchestrator | Wednesday 08 April 2026 00:46:21 +0000 (0:00:00.623) 0:02:16.796 ******* 2026-04-08 00:48:46.812019 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.812028 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.812036 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.812044 | orchestrator | 2026-04-08 00:48:46.812052 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-08 00:48:46.812060 | orchestrator | Wednesday 08 April 2026 00:46:21 +0000 (0:00:00.593) 0:02:17.389 ******* 2026-04-08 00:48:46.812068 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.812075 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.812083 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.812091 | orchestrator | 2026-04-08 00:48:46.812099 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-08 00:48:46.812107 | orchestrator | Wednesday 08 April 2026 00:46:22 +0000 (0:00:00.274) 0:02:17.664 ******* 2026-04-08 00:48:46.812116 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.812123 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.812131 | orchestrator | changed: 
[testbed-node-2] 2026-04-08 00:48:46.812139 | orchestrator | 2026-04-08 00:48:46.812147 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-08 00:48:46.812155 | orchestrator | Wednesday 08 April 2026 00:46:22 +0000 (0:00:00.747) 0:02:18.412 ******* 2026-04-08 00:48:46.812163 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.812171 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.812179 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.812187 | orchestrator | 2026-04-08 00:48:46.812195 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-08 00:48:46.812203 | orchestrator | Wednesday 08 April 2026 00:46:23 +0000 (0:00:00.604) 0:02:19.016 ******* 2026-04-08 00:48:46.812211 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.812219 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.812227 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.812235 | orchestrator | 2026-04-08 00:48:46.812243 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-08 00:48:46.812251 | orchestrator | Wednesday 08 April 2026 00:46:24 +0000 (0:00:01.039) 0:02:20.056 ******* 2026-04-08 00:48:46.812259 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:46.812266 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:46.812274 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:46.812282 | orchestrator | 2026-04-08 00:48:46.812290 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-08 00:48:46.812298 | orchestrator | Wednesday 08 April 2026 00:46:25 +0000 (0:00:00.871) 0:02:20.927 ******* 2026-04-08 00:48:46.812306 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.812314 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.812322 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:48:46.812330 | orchestrator | 2026-04-08 00:48:46.812344 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-08 00:48:46.812352 | orchestrator | Wednesday 08 April 2026 00:46:25 +0000 (0:00:00.468) 0:02:21.396 ******* 2026-04-08 00:48:46.812360 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.812368 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.812376 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.812384 | orchestrator | 2026-04-08 00:48:46.812392 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-08 00:48:46.812400 | orchestrator | Wednesday 08 April 2026 00:46:26 +0000 (0:00:00.269) 0:02:21.666 ******* 2026-04-08 00:48:46.812408 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.812416 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.812424 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.812432 | orchestrator | 2026-04-08 00:48:46.812441 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-08 00:48:46.812449 | orchestrator | Wednesday 08 April 2026 00:46:26 +0000 (0:00:00.689) 0:02:22.355 ******* 2026-04-08 00:48:46.812457 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.812471 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.812479 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.812487 | orchestrator | 2026-04-08 00:48:46.812495 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-08 00:48:46.812503 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:00.641) 0:02:22.996 ******* 2026-04-08 00:48:46.812511 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-08 00:48:46.812519 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-08 00:48:46.812528 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-08 00:48:46.812536 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-08 00:48:46.812544 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-08 00:48:46.812552 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-08 00:48:46.812560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-08 00:48:46.812568 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-08 00:48:46.812575 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-08 00:48:46.812583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-08 00:48:46.812591 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-08 00:48:46.812599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-08 00:48:46.812619 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-08 00:48:46.812627 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-08 00:48:46.812635 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-08 00:48:46.812643 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-08 00:48:46.812651 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-08 00:48:46.812659 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-08 00:48:46.812667 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-08 00:48:46.812675 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-08 00:48:46.812690 | orchestrator | 2026-04-08 00:48:46.812698 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-08 00:48:46.812706 | orchestrator | 2026-04-08 00:48:46.812714 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-08 00:48:46.812722 | orchestrator | Wednesday 08 April 2026 00:46:30 +0000 (0:00:03.355) 0:02:26.352 ******* 2026-04-08 00:48:46.812730 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:48:46.812738 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:48:46.812746 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:48:46.812754 | orchestrator | 2026-04-08 00:48:46.812762 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-08 00:48:46.812770 | orchestrator | Wednesday 08 April 2026 00:46:31 +0000 (0:00:00.273) 0:02:26.625 ******* 2026-04-08 00:48:46.812777 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:48:46.812785 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:48:46.812793 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:48:46.812801 | orchestrator | 2026-04-08 00:48:46.812814 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-08 00:48:46.812828 | orchestrator | Wednesday 08 April 2026 00:46:32 +0000 (0:00:01.688) 0:02:28.314 ******* 2026-04-08 00:48:46.812843 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:48:46.812856 | 
orchestrator | ok: [testbed-node-4] 2026-04-08 00:48:46.812868 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:48:46.812880 | orchestrator | 2026-04-08 00:48:46.812893 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-08 00:48:46.812906 | orchestrator | Wednesday 08 April 2026 00:46:33 +0000 (0:00:00.460) 0:02:28.774 ******* 2026-04-08 00:48:46.812919 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:48:46.812933 | orchestrator | 2026-04-08 00:48:46.812946 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-08 00:48:46.812959 | orchestrator | Wednesday 08 April 2026 00:46:33 +0000 (0:00:00.435) 0:02:29.209 ******* 2026-04-08 00:48:46.812991 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:48:46.813006 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:48:46.813018 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:48:46.813026 | orchestrator | 2026-04-08 00:48:46.813034 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-08 00:48:46.813042 | orchestrator | Wednesday 08 April 2026 00:46:33 +0000 (0:00:00.278) 0:02:29.488 ******* 2026-04-08 00:48:46.813049 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:48:46.813057 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:48:46.813065 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:48:46.813073 | orchestrator | 2026-04-08 00:48:46.813081 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-08 00:48:46.813096 | orchestrator | Wednesday 08 April 2026 00:46:34 +0000 (0:00:00.642) 0:02:30.131 ******* 2026-04-08 00:48:46.813104 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:48:46.813112 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:48:46.813119 | 
orchestrator | skipping: [testbed-node-5] 2026-04-08 00:48:46.813127 | orchestrator | 2026-04-08 00:48:46.813135 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-08 00:48:46.813143 | orchestrator | Wednesday 08 April 2026 00:46:35 +0000 (0:00:00.485) 0:02:30.616 ******* 2026-04-08 00:48:46.813150 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:48:46.813158 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:48:46.813166 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:48:46.813174 | orchestrator | 2026-04-08 00:48:46.813181 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-08 00:48:46.813189 | orchestrator | Wednesday 08 April 2026 00:46:35 +0000 (0:00:00.739) 0:02:31.355 ******* 2026-04-08 00:48:46.813224 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:48:46.813232 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:48:46.813249 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:48:46.813257 | orchestrator | 2026-04-08 00:48:46.813265 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-08 00:48:46.813273 | orchestrator | Wednesday 08 April 2026 00:46:37 +0000 (0:00:01.342) 0:02:32.698 ******* 2026-04-08 00:48:46.813281 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:48:46.813289 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:48:46.813297 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:48:46.813305 | orchestrator | 2026-04-08 00:48:46.813312 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-08 00:48:46.813320 | orchestrator | Wednesday 08 April 2026 00:46:38 +0000 (0:00:01.811) 0:02:34.509 ******* 2026-04-08 00:48:46.813328 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:48:46.813336 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:48:46.813344 | orchestrator | 
changed: [testbed-node-5] 2026-04-08 00:48:46.813351 | orchestrator | 2026-04-08 00:48:46.813359 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-08 00:48:46.813367 | orchestrator | 2026-04-08 00:48:46.813375 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-08 00:48:46.813383 | orchestrator | Wednesday 08 April 2026 00:46:49 +0000 (0:00:11.085) 0:02:45.595 ******* 2026-04-08 00:48:46.813391 | orchestrator | ok: [testbed-manager] 2026-04-08 00:48:46.813399 | orchestrator | 2026-04-08 00:48:46.813412 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-08 00:48:46.813421 | orchestrator | Wednesday 08 April 2026 00:46:50 +0000 (0:00:00.860) 0:02:46.455 ******* 2026-04-08 00:48:46.813429 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.813436 | orchestrator | 2026-04-08 00:48:46.813444 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-08 00:48:46.813452 | orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:00.463) 0:02:46.919 ******* 2026-04-08 00:48:46.813460 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-08 00:48:46.813468 | orchestrator | 2026-04-08 00:48:46.813476 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-08 00:48:46.813484 | orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:00.587) 0:02:47.506 ******* 2026-04-08 00:48:46.813491 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.813499 | orchestrator | 2026-04-08 00:48:46.813507 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-08 00:48:46.813515 | orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:00.769) 0:02:48.276 ******* 2026-04-08 00:48:46.813523 | orchestrator | changed: 
[testbed-manager] 2026-04-08 00:48:46.813531 | orchestrator | 2026-04-08 00:48:46.813539 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-08 00:48:46.813546 | orchestrator | Wednesday 08 April 2026 00:46:53 +0000 (0:00:01.104) 0:02:49.381 ******* 2026-04-08 00:48:46.813555 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-08 00:48:46.813563 | orchestrator | 2026-04-08 00:48:46.813571 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-08 00:48:46.813578 | orchestrator | Wednesday 08 April 2026 00:46:55 +0000 (0:00:01.618) 0:02:51.000 ******* 2026-04-08 00:48:46.813586 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-08 00:48:46.813594 | orchestrator | 2026-04-08 00:48:46.813602 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-08 00:48:46.813610 | orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.749) 0:02:51.750 ******* 2026-04-08 00:48:46.813618 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.813626 | orchestrator | 2026-04-08 00:48:46.813634 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-08 00:48:46.813642 | orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.360) 0:02:52.110 ******* 2026-04-08 00:48:46.813649 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.813657 | orchestrator | 2026-04-08 00:48:46.813665 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-08 00:48:46.813678 | orchestrator | 2026-04-08 00:48:46.813686 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-08 00:48:46.813694 | orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.460) 0:02:52.571 ******* 2026-04-08 00:48:46.813702 | orchestrator | ok: [testbed-manager] 
2026-04-08 00:48:46.813710 | orchestrator | 2026-04-08 00:48:46.813718 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-08 00:48:46.813725 | orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.116) 0:02:52.687 ******* 2026-04-08 00:48:46.813733 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:48:46.813741 | orchestrator | 2026-04-08 00:48:46.813749 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-08 00:48:46.813757 | orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.225) 0:02:52.913 ******* 2026-04-08 00:48:46.813765 | orchestrator | ok: [testbed-manager] 2026-04-08 00:48:46.813772 | orchestrator | 2026-04-08 00:48:46.813780 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-08 00:48:46.813788 | orchestrator | Wednesday 08 April 2026 00:46:58 +0000 (0:00:00.880) 0:02:53.793 ******* 2026-04-08 00:48:46.813801 | orchestrator | ok: [testbed-manager] 2026-04-08 00:48:46.813810 | orchestrator | 2026-04-08 00:48:46.813818 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-08 00:48:46.813825 | orchestrator | Wednesday 08 April 2026 00:46:59 +0000 (0:00:01.535) 0:02:55.328 ******* 2026-04-08 00:48:46.813834 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.813841 | orchestrator | 2026-04-08 00:48:46.813849 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-08 00:48:46.813858 | orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:00.775) 0:02:56.104 ******* 2026-04-08 00:48:46.813872 | orchestrator | ok: [testbed-manager] 2026-04-08 00:48:46.813885 | orchestrator | 2026-04-08 00:48:46.813898 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 
2026-04-08 00:48:46.813911 | orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:00.372) 0:02:56.476 ******* 2026-04-08 00:48:46.813922 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.813946 | orchestrator | 2026-04-08 00:48:46.813958 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-08 00:48:46.813969 | orchestrator | Wednesday 08 April 2026 00:47:08 +0000 (0:00:07.807) 0:03:04.283 ******* 2026-04-08 00:48:46.814104 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.814120 | orchestrator | 2026-04-08 00:48:46.814133 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-08 00:48:46.814146 | orchestrator | Wednesday 08 April 2026 00:47:22 +0000 (0:00:13.559) 0:03:17.843 ******* 2026-04-08 00:48:46.814157 | orchestrator | ok: [testbed-manager] 2026-04-08 00:48:46.814165 | orchestrator | 2026-04-08 00:48:46.814173 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-08 00:48:46.814180 | orchestrator | 2026-04-08 00:48:46.814188 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-08 00:48:46.814196 | orchestrator | Wednesday 08 April 2026 00:47:22 +0000 (0:00:00.587) 0:03:18.431 ******* 2026-04-08 00:48:46.814204 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.814212 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.814220 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.814228 | orchestrator | 2026-04-08 00:48:46.814236 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-08 00:48:46.814251 | orchestrator | Wednesday 08 April 2026 00:47:23 +0000 (0:00:00.630) 0:03:19.061 ******* 2026-04-08 00:48:46.814259 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814267 | orchestrator | skipping: [testbed-node-1] 
2026-04-08 00:48:46.814274 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.814282 | orchestrator | 2026-04-08 00:48:46.814290 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-08 00:48:46.814307 | orchestrator | Wednesday 08 April 2026 00:47:23 +0000 (0:00:00.411) 0:03:19.473 ******* 2026-04-08 00:48:46.814315 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:46.814323 | orchestrator | 2026-04-08 00:48:46.814330 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-08 00:48:46.814336 | orchestrator | Wednesday 08 April 2026 00:47:24 +0000 (0:00:00.707) 0:03:20.180 ******* 2026-04-08 00:48:46.814343 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:48:46.814350 | orchestrator | 2026-04-08 00:48:46.814356 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-08 00:48:46.814363 | orchestrator | Wednesday 08 April 2026 00:47:25 +0000 (0:00:01.072) 0:03:21.253 ******* 2026-04-08 00:48:46.814370 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:48:46.814376 | orchestrator | 2026-04-08 00:48:46.814383 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-08 00:48:46.814389 | orchestrator | Wednesday 08 April 2026 00:47:26 +0000 (0:00:00.957) 0:03:22.211 ******* 2026-04-08 00:48:46.814396 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814403 | orchestrator | 2026-04-08 00:48:46.814409 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-08 00:48:46.814416 | orchestrator | Wednesday 08 April 2026 00:47:26 +0000 (0:00:00.141) 0:03:22.352 ******* 2026-04-08 00:48:46.814422 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:48:46.814429 | 
orchestrator | 2026-04-08 00:48:46.814436 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-08 00:48:46.814442 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:01.295) 0:03:23.648 ******* 2026-04-08 00:48:46.814449 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814455 | orchestrator | 2026-04-08 00:48:46.814462 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-08 00:48:46.814469 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:00.116) 0:03:23.765 ******* 2026-04-08 00:48:46.814475 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814482 | orchestrator | 2026-04-08 00:48:46.814488 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-08 00:48:46.814495 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:00.144) 0:03:23.910 ******* 2026-04-08 00:48:46.814501 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814508 | orchestrator | 2026-04-08 00:48:46.814515 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-08 00:48:46.814521 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:00.159) 0:03:24.069 ******* 2026-04-08 00:48:46.814528 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814534 | orchestrator | 2026-04-08 00:48:46.814541 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-08 00:48:46.814548 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:00.149) 0:03:24.219 ******* 2026-04-08 00:48:46.814554 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:48:46.814561 | orchestrator | 2026-04-08 00:48:46.814567 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-08 00:48:46.814574 | orchestrator | Wednesday 08 April 
2026 00:47:33 +0000 (0:00:04.957) 0:03:29.177 ******* 2026-04-08 00:48:46.814581 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-08 00:48:46.814594 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-04-08 00:48:46.814602 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-08 00:48:46.814609 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-08 00:48:46.814616 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-08 00:48:46.814622 | orchestrator | 2026-04-08 00:48:46.814629 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-08 00:48:46.814642 | orchestrator | Wednesday 08 April 2026 00:48:16 +0000 (0:00:43.385) 0:04:12.563 ******* 2026-04-08 00:48:46.814663 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:48:46.814670 | orchestrator | 2026-04-08 00:48:46.814677 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-08 00:48:46.814683 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:01.121) 0:04:13.684 ******* 2026-04-08 00:48:46.814690 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:48:46.814696 | orchestrator | 2026-04-08 00:48:46.814703 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-08 00:48:46.814710 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:01.504) 0:04:15.188 ******* 2026-04-08 00:48:46.814716 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:48:46.814723 | orchestrator | 2026-04-08 00:48:46.814730 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-08 00:48:46.814736 | orchestrator | Wednesday 08 April 2026 00:48:20 +0000 
(0:00:01.122) 0:04:16.311 ******* 2026-04-08 00:48:46.814743 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814749 | orchestrator | 2026-04-08 00:48:46.814756 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-08 00:48:46.814763 | orchestrator | Wednesday 08 April 2026 00:48:20 +0000 (0:00:00.117) 0:04:16.429 ******* 2026-04-08 00:48:46.814769 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-08 00:48:46.814776 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-08 00:48:46.814783 | orchestrator | 2026-04-08 00:48:46.814793 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-08 00:48:46.814800 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:01.787) 0:04:18.216 ******* 2026-04-08 00:48:46.814807 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.814813 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.814820 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.814827 | orchestrator | 2026-04-08 00:48:46.814833 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-08 00:48:46.814840 | orchestrator | Wednesday 08 April 2026 00:48:23 +0000 (0:00:00.471) 0:04:18.688 ******* 2026-04-08 00:48:46.814847 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.814853 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.814860 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.814866 | orchestrator | 2026-04-08 00:48:46.814873 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-08 00:48:46.814880 | orchestrator | 2026-04-08 00:48:46.814886 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-08 
00:48:46.814893 | orchestrator | Wednesday 08 April 2026 00:48:23 +0000 (0:00:00.832) 0:04:19.520 ******* 2026-04-08 00:48:46.814899 | orchestrator | ok: [testbed-manager] 2026-04-08 00:48:46.814906 | orchestrator | 2026-04-08 00:48:46.814913 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-08 00:48:46.814919 | orchestrator | Wednesday 08 April 2026 00:48:24 +0000 (0:00:00.161) 0:04:19.681 ******* 2026-04-08 00:48:46.814926 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:48:46.814932 | orchestrator | 2026-04-08 00:48:46.814939 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-08 00:48:46.814945 | orchestrator | Wednesday 08 April 2026 00:48:24 +0000 (0:00:00.395) 0:04:20.076 ******* 2026-04-08 00:48:46.814952 | orchestrator | changed: [testbed-manager] 2026-04-08 00:48:46.814959 | orchestrator | 2026-04-08 00:48:46.814965 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-08 00:48:46.814972 | orchestrator | 2026-04-08 00:48:46.814993 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-08 00:48:46.815000 | orchestrator | Wednesday 08 April 2026 00:48:29 +0000 (0:00:05.222) 0:04:25.299 ******* 2026-04-08 00:48:46.815012 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:48:46.815019 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:48:46.815025 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:48:46.815032 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:46.815039 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:46.815045 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:46.815052 | orchestrator | 2026-04-08 00:48:46.815058 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-08 00:48:46.815065 | orchestrator | 
Wednesday 08 April 2026 00:48:30 +0000 (0:00:00.724) 0:04:26.023 ******* 2026-04-08 00:48:46.815072 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-08 00:48:46.815078 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-08 00:48:46.815085 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-08 00:48:46.815092 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-08 00:48:46.815098 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-08 00:48:46.815105 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-08 00:48:46.815111 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-08 00:48:46.815118 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-08 00:48:46.815130 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-08 00:48:46.815137 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-08 00:48:46.815143 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-08 00:48:46.815150 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-08 00:48:46.815157 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-08 00:48:46.815163 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-08 00:48:46.815170 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-08 00:48:46.815176 | orchestrator | ok: 
[testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-08 00:48:46.815183 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-08 00:48:46.815190 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-08 00:48:46.815196 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-08 00:48:46.815203 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-08 00:48:46.815210 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-08 00:48:46.815216 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-08 00:48:46.815223 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-08 00:48:46.815229 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-08 00:48:46.815236 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-08 00:48:46.815243 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-08 00:48:46.815250 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-08 00:48:46.815256 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-08 00:48:46.815263 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-08 00:48:46.815275 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-08 00:48:46.815282 | orchestrator | 2026-04-08 00:48:46.815289 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-08 
00:48:46.815295 | orchestrator | Wednesday 08 April 2026 00:48:43 +0000 (0:00:13.098) 0:04:39.121 ******* 2026-04-08 00:48:46.815302 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:48:46.815308 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:48:46.815315 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:48:46.815322 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.815328 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.815335 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.815342 | orchestrator | 2026-04-08 00:48:46.815348 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-08 00:48:46.815355 | orchestrator | Wednesday 08 April 2026 00:48:43 +0000 (0:00:00.443) 0:04:39.565 ******* 2026-04-08 00:48:46.815362 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:48:46.815869 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:48:46.815884 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:48:46.815891 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:46.815897 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:46.815904 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:46.815911 | orchestrator | 2026-04-08 00:48:46.815918 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:48:46.815925 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:48:46.815938 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-08 00:48:46.815945 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-08 00:48:46.815952 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-08 00:48:46.815959 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-08 00:48:46.815965 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-08 00:48:46.816020 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-08 00:48:46.816029 | orchestrator | 2026-04-08 00:48:46.816036 | orchestrator | 2026-04-08 00:48:46.816043 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:48:46.816058 | orchestrator | Wednesday 08 April 2026 00:48:44 +0000 (0:00:00.485) 0:04:40.050 ******* 2026-04-08 00:48:46.816065 | orchestrator | =============================================================================== 2026-04-08 00:48:46.816071 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.95s 2026-04-08 00:48:46.816079 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.39s 2026-04-08 00:48:46.816086 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.30s 2026-04-08 00:48:46.816093 | orchestrator | kubectl : Install required packages ------------------------------------ 13.56s 2026-04-08 00:48:46.816099 | orchestrator | Manage labels ---------------------------------------------------------- 13.10s 2026-04-08 00:48:46.816106 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.09s 2026-04-08 00:48:46.816113 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.81s 2026-04-08 00:48:46.816119 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.93s 2026-04-08 00:48:46.816133 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.22s 2026-04-08 00:48:46.816140 | orchestrator | 
k3s_server_post : Install Cilium ---------------------------------------- 4.96s 2026-04-08 00:48:46.816146 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.36s 2026-04-08 00:48:46.816153 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.18s 2026-04-08 00:48:46.816160 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.04s 2026-04-08 00:48:46.816167 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.81s 2026-04-08 00:48:46.816173 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.71s 2026-04-08 00:48:46.816180 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.56s 2026-04-08 00:48:46.816187 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 2.12s 2026-04-08 00:48:46.816193 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.92s 2026-04-08 00:48:46.816200 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.81s 2026-04-08 00:48:46.816207 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.79s 2026-04-08 00:48:46.816213 | orchestrator | 2026-04-08 00:48:46 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:48:46.816220 | orchestrator | 2026-04-08 00:48:46 | INFO  | Task c5d5ddbe-25e3-47ec-91c1-d43ab8ab3050 is in state STARTED 2026-04-08 00:48:46.816227 | orchestrator | 2026-04-08 00:48:46 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:48:46.816234 | orchestrator | 2026-04-08 00:48:46 | INFO  | Task 57efc3d0-605d-47ea-844c-d61d53943c30 is in state STARTED 2026-04-08 00:48:46.816240 | 
orchestrator | 2026-04-08 00:48:46 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:48:46.816247 | orchestrator | 2026-04-08 00:48:46 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:48:46.816254 | orchestrator | 2026-04-08 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:49.872443 | orchestrator | 2026-04-08 00:48:49 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:48:49.881583 | orchestrator | 2026-04-08 00:48:49 | INFO  | Task c5d5ddbe-25e3-47ec-91c1-d43ab8ab3050 is in state STARTED 2026-04-08 00:48:49.883443 | orchestrator | 2026-04-08 00:48:49 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:48:49.889575 | orchestrator | 2026-04-08 00:48:49 | INFO  | Task 57efc3d0-605d-47ea-844c-d61d53943c30 is in state STARTED 2026-04-08 00:48:49.892405 | orchestrator | 2026-04-08 00:48:49 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:48:49.893540 | orchestrator | 2026-04-08 00:48:49 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:48:49.893591 | orchestrator | 2026-04-08 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:52.925278 | orchestrator | 2026-04-08 00:48:52 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:48:52.926353 | orchestrator | 2026-04-08 00:48:52 | INFO  | Task c5d5ddbe-25e3-47ec-91c1-d43ab8ab3050 is in state STARTED 2026-04-08 00:48:52.927566 | orchestrator | 2026-04-08 00:48:52 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:48:52.928575 | orchestrator | 2026-04-08 00:48:52 | INFO  | Task 57efc3d0-605d-47ea-844c-d61d53943c30 is in state SUCCESS 2026-04-08 00:48:52.929722 | orchestrator | 2026-04-08 00:48:52 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:48:52.931120 | 
orchestrator | 2026-04-08 00:48:52 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:48:52.931180 | orchestrator | 2026-04-08 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:55.971893 | orchestrator | 2026-04-08 00:48:55 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:48:55.972863 | orchestrator | 2026-04-08 00:48:55 | INFO  | Task c5d5ddbe-25e3-47ec-91c1-d43ab8ab3050 is in state SUCCESS 2026-04-08 00:48:55.973003 | orchestrator | 2026-04-08 00:48:55 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:48:55.975893 | orchestrator | 2026-04-08 00:48:55 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:48:55.975964 | orchestrator | 2026-04-08 00:48:55 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:48:55.975978 | orchestrator | 2026-04-08 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:59.020620 | orchestrator | 2026-04-08 00:48:59 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:48:59.021547 | orchestrator | 2026-04-08 00:48:59 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:48:59.022635 | orchestrator | 2026-04-08 00:48:59 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:48:59.024438 | orchestrator | 2026-04-08 00:48:59 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:48:59.024479 | orchestrator | 2026-04-08 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:02.071523 | orchestrator | 2026-04-08 00:49:02 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:02.073063 | orchestrator | 2026-04-08 00:49:02 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:49:02.075241 | orchestrator | 2026-04-08 
00:49:02 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:49:02.076323 | orchestrator | 2026-04-08 00:49:02 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:49:02.076369 | orchestrator | 2026-04-08 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:05.123449 | orchestrator | 2026-04-08 00:49:05 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:05.125114 | orchestrator | 2026-04-08 00:49:05 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:49:05.126974 | orchestrator | 2026-04-08 00:49:05 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:49:05.130255 | orchestrator | 2026-04-08 00:49:05 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:49:05.130977 | orchestrator | 2026-04-08 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:08.177389 | orchestrator | 2026-04-08 00:49:08 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:08.181577 | orchestrator | 2026-04-08 00:49:08 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:49:08.185562 | orchestrator | 2026-04-08 00:49:08 | INFO  | Task 52a04774-8f09-4a11-aefc-dc8d473e476b is in state STARTED 2026-04-08 00:49:08.187576 | orchestrator | 2026-04-08 00:49:08 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:49:08.187658 | orchestrator | 2026-04-08 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:11.240071 | orchestrator | 2026-04-08 00:49:11 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:11.241901 | orchestrator | 2026-04-08 00:49:11 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:49:11.244422 | orchestrator | 2026-04-08 00:49:11 | INFO  | Task 
52a04774-8f09-4a11-aefc-dc8d473e476b is in state SUCCESS 2026-04-08 00:49:11.245876 | orchestrator | 2026-04-08 00:49:11.245911 | orchestrator | 2026-04-08 00:49:11.245917 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-08 00:49:11.245922 | orchestrator | 2026-04-08 00:49:11.245927 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-08 00:49:11.245931 | orchestrator | Wednesday 08 April 2026 00:48:47 +0000 (0:00:00.184) 0:00:00.184 ******* 2026-04-08 00:49:11.245936 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-08 00:49:11.245941 | orchestrator | 2026-04-08 00:49:11.245945 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-08 00:49:11.245949 | orchestrator | Wednesday 08 April 2026 00:48:48 +0000 (0:00:00.971) 0:00:01.155 ******* 2026-04-08 00:49:11.245954 | orchestrator | changed: [testbed-manager] 2026-04-08 00:49:11.245958 | orchestrator | 2026-04-08 00:49:11.245963 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-08 00:49:11.245968 | orchestrator | Wednesday 08 April 2026 00:48:50 +0000 (0:00:01.336) 0:00:02.492 ******* 2026-04-08 00:49:11.245972 | orchestrator | changed: [testbed-manager] 2026-04-08 00:49:11.245976 | orchestrator | 2026-04-08 00:49:11.245980 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:49:11.245984 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:49:11.245990 | orchestrator | 2026-04-08 00:49:11.246008 | orchestrator | 2026-04-08 00:49:11.246041 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:49:11.246046 | orchestrator | Wednesday 08 April 2026 00:48:50 +0000 (0:00:00.467) 0:00:02.959 ******* 
2026-04-08 00:49:11.246051 | orchestrator | =============================================================================== 2026-04-08 00:49:11.246055 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.34s 2026-04-08 00:49:11.246059 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.97s 2026-04-08 00:49:11.246063 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s 2026-04-08 00:49:11.246067 | orchestrator | 2026-04-08 00:49:11.246071 | orchestrator | 2026-04-08 00:49:11.246075 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-08 00:49:11.246079 | orchestrator | 2026-04-08 00:49:11.246083 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-08 00:49:11.246087 | orchestrator | Wednesday 08 April 2026 00:48:47 +0000 (0:00:00.194) 0:00:00.194 ******* 2026-04-08 00:49:11.246091 | orchestrator | ok: [testbed-manager] 2026-04-08 00:49:11.246096 | orchestrator | 2026-04-08 00:49:11.246100 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-08 00:49:11.246104 | orchestrator | Wednesday 08 April 2026 00:48:48 +0000 (0:00:00.781) 0:00:00.976 ******* 2026-04-08 00:49:11.246108 | orchestrator | ok: [testbed-manager] 2026-04-08 00:49:11.246112 | orchestrator | 2026-04-08 00:49:11.246116 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-08 00:49:11.246120 | orchestrator | Wednesday 08 April 2026 00:48:48 +0000 (0:00:00.562) 0:00:01.538 ******* 2026-04-08 00:49:11.246124 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-08 00:49:11.246128 | orchestrator | 2026-04-08 00:49:11.246132 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-08 00:49:11.246136 | 
orchestrator | Wednesday 08 April 2026 00:48:49 +0000 (0:00:00.857) 0:00:02.396 ******* 2026-04-08 00:49:11.246158 | orchestrator | changed: [testbed-manager] 2026-04-08 00:49:11.246162 | orchestrator | 2026-04-08 00:49:11.246172 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-08 00:49:11.246177 | orchestrator | Wednesday 08 April 2026 00:48:50 +0000 (0:00:01.104) 0:00:03.500 ******* 2026-04-08 00:49:11.246181 | orchestrator | changed: [testbed-manager] 2026-04-08 00:49:11.246185 | orchestrator | 2026-04-08 00:49:11.246188 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-08 00:49:11.246192 | orchestrator | Wednesday 08 April 2026 00:48:51 +0000 (0:00:00.487) 0:00:03.987 ******* 2026-04-08 00:49:11.246196 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-08 00:49:11.246201 | orchestrator | 2026-04-08 00:49:11.246204 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-08 00:49:11.246208 | orchestrator | Wednesday 08 April 2026 00:48:52 +0000 (0:00:01.467) 0:00:05.455 ******* 2026-04-08 00:49:11.246212 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-08 00:49:11.246216 | orchestrator | 2026-04-08 00:49:11.246221 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-08 00:49:11.246227 | orchestrator | Wednesday 08 April 2026 00:48:53 +0000 (0:00:00.719) 0:00:06.175 ******* 2026-04-08 00:49:11.246232 | orchestrator | ok: [testbed-manager] 2026-04-08 00:49:11.246238 | orchestrator | 2026-04-08 00:49:11.246244 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-08 00:49:11.246250 | orchestrator | Wednesday 08 April 2026 00:48:53 +0000 (0:00:00.423) 0:00:06.598 ******* 2026-04-08 00:49:11.246256 | orchestrator | ok: [testbed-manager] 2026-04-08 00:49:11.246261 | 
orchestrator | 2026-04-08 00:49:11.246267 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:49:11.246286 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:49:11.246292 | orchestrator | 2026-04-08 00:49:11.246298 | orchestrator | 2026-04-08 00:49:11.246303 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:49:11.246309 | orchestrator | Wednesday 08 April 2026 00:48:54 +0000 (0:00:00.321) 0:00:06.919 ******* 2026-04-08 00:49:11.246315 | orchestrator | =============================================================================== 2026-04-08 00:49:11.246320 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.47s 2026-04-08 00:49:11.246326 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.10s 2026-04-08 00:49:11.246332 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.86s 2026-04-08 00:49:11.246350 | orchestrator | Get home directory of operator user ------------------------------------- 0.78s 2026-04-08 00:49:11.246357 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s 2026-04-08 00:49:11.246364 | orchestrator | Create .kube directory -------------------------------------------------- 0.56s 2026-04-08 00:49:11.246368 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s 2026-04-08 00:49:11.246372 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.42s 2026-04-08 00:49:11.246376 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s 2026-04-08 00:49:11.246380 | orchestrator | 2026-04-08 00:49:11.246383 | orchestrator | 2026-04-08 00:49:11.246387 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] *********************************************** 2026-04-08 00:49:11.246391 | orchestrator | 2026-04-08 00:49:11.246394 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-08 00:49:11.246398 | orchestrator | Wednesday 08 April 2026 00:46:47 +0000 (0:00:00.111) 0:00:00.111 ******* 2026-04-08 00:49:11.246402 | orchestrator | ok: [localhost] => { 2026-04-08 00:49:11.246406 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-08 00:49:11.246410 | orchestrator | } 2026-04-08 00:49:11.246419 | orchestrator | 2026-04-08 00:49:11.246423 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-08 00:49:11.246427 | orchestrator | Wednesday 08 April 2026 00:46:47 +0000 (0:00:00.051) 0:00:00.163 ******* 2026-04-08 00:49:11.246432 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-08 00:49:11.246437 | orchestrator | ...ignoring 2026-04-08 00:49:11.246441 | orchestrator | 2026-04-08 00:49:11.246445 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-08 00:49:11.246448 | orchestrator | Wednesday 08 April 2026 00:46:50 +0000 (0:00:03.568) 0:00:03.731 ******* 2026-04-08 00:49:11.246452 | orchestrator | skipping: [localhost] 2026-04-08 00:49:11.246456 | orchestrator | 2026-04-08 00:49:11.246460 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-08 00:49:11.246463 | orchestrator | Wednesday 08 April 2026 00:46:50 +0000 (0:00:00.132) 0:00:03.864 ******* 2026-04-08 00:49:11.246467 | orchestrator | ok: [localhost] 2026-04-08 00:49:11.246471 | orchestrator | 2026-04-08 00:49:11.246475 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-04-08 00:49:11.246479 | orchestrator | 2026-04-08 00:49:11.246482 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:49:11.246486 | orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:00.718) 0:00:04.582 ******* 2026-04-08 00:49:11.246490 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:11.246494 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:11.246497 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:11.246501 | orchestrator | 2026-04-08 00:49:11.246505 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:49:11.246509 | orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:00.667) 0:00:05.249 ******* 2026-04-08 00:49:11.246512 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-08 00:49:11.246516 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-08 00:49:11.246520 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-08 00:49:11.246524 | orchestrator | 2026-04-08 00:49:11.246528 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-08 00:49:11.246531 | orchestrator | 2026-04-08 00:49:11.246535 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-08 00:49:11.246539 | orchestrator | Wednesday 08 April 2026 00:46:53 +0000 (0:00:00.671) 0:00:05.920 ******* 2026-04-08 00:49:11.246543 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:49:11.246547 | orchestrator | 2026-04-08 00:49:11.246550 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-08 00:49:11.246554 | orchestrator | Wednesday 08 April 2026 00:46:53 +0000 (0:00:00.822) 0:00:06.743 ******* 2026-04-08 
00:49:11.246558 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:11.246562 | orchestrator | 2026-04-08 00:49:11.246565 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-08 00:49:11.246569 | orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:02.447) 0:00:09.191 ******* 2026-04-08 00:49:11.246573 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.246576 | orchestrator | 2026-04-08 00:49:11.246580 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-08 00:49:11.246584 | orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.370) 0:00:09.561 ******* 2026-04-08 00:49:11.246588 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.246591 | orchestrator | 2026-04-08 00:49:11.246595 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-08 00:49:11.246599 | orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.305) 0:00:09.867 ******* 2026-04-08 00:49:11.246603 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.246606 | orchestrator | 2026-04-08 00:49:11.246610 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-08 00:49:11.246622 | orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.349) 0:00:10.217 ******* 2026-04-08 00:49:11.246626 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.246629 | orchestrator | 2026-04-08 00:49:11.246633 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-08 00:49:11.246637 | orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.337) 0:00:10.554 ******* 2026-04-08 00:49:11.246641 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:49:11.246644 | orchestrator | 2026-04-08 00:49:11.246648 
| orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-08 00:49:11.246655 | orchestrator | Wednesday 08 April 2026 00:46:58 +0000 (0:00:00.657) 0:00:11.211 ******* 2026-04-08 00:49:11.246659 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:11.246663 | orchestrator | 2026-04-08 00:49:11.246667 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-08 00:49:11.246670 | orchestrator | Wednesday 08 April 2026 00:46:59 +0000 (0:00:01.001) 0:00:12.212 ******* 2026-04-08 00:49:11.246674 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.246678 | orchestrator | 2026-04-08 00:49:11.246681 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-08 00:49:11.246685 | orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:00.788) 0:00:13.000 ******* 2026-04-08 00:49:11.246689 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.246693 | orchestrator | 2026-04-08 00:49:11.246696 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-08 00:49:11.246700 | orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:00.310) 0:00:13.311 ******* 2026-04-08 00:49:11.246708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246730 | orchestrator | 2026-04-08 00:49:11.246734 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-08 00:49:11.246738 | orchestrator | Wednesday 08 April 2026 00:47:01 +0000 (0:00:01.240) 0:00:14.552 ******* 2026-04-08 00:49:11.246746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246750 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246761 | orchestrator | 2026-04-08 00:49:11.246765 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-08 00:49:11.246769 | orchestrator | Wednesday 08 April 2026 00:47:03 +0000 (0:00:02.264) 0:00:16.817 ******* 2026-04-08 00:49:11.246773 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-08 00:49:11.246777 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-08 00:49:11.246780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-08 00:49:11.246784 | orchestrator | 2026-04-08 00:49:11.246788 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-08 00:49:11.246794 | orchestrator | Wednesday 08 April 2026 00:47:05 +0000 (0:00:01.727) 0:00:18.544 ******* 2026-04-08 00:49:11.246798 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-08 00:49:11.246802 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-08 00:49:11.246806 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-08 00:49:11.246810 | orchestrator | 2026-04-08 00:49:11.246813 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-08 00:49:11.246820 | orchestrator | Wednesday 08 April 2026 00:47:09 +0000 (0:00:03.866) 0:00:22.410 ******* 2026-04-08 00:49:11.246824 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-08 00:49:11.246828 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-08 00:49:11.246832 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-08 00:49:11.246835 | orchestrator | 2026-04-08 00:49:11.246839 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-08 00:49:11.246843 | orchestrator | Wednesday 08 April 2026 00:47:10 +0000 (0:00:01.226) 0:00:23.637 ******* 2026-04-08 00:49:11.246847 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-08 00:49:11.246851 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-08 00:49:11.246854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-08 00:49:11.246858 | orchestrator | 2026-04-08 00:49:11.246862 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-08 00:49:11.246866 | orchestrator | Wednesday 08 April 2026 00:47:12 +0000 (0:00:01.823) 0:00:25.460 ******* 2026-04-08 00:49:11.246869 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-08 00:49:11.246875 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-08 00:49:11.246881 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-08 00:49:11.246888 | orchestrator | 2026-04-08 00:49:11.246894 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-08 00:49:11.246901 | orchestrator | Wednesday 08 April 2026 00:47:14 +0000 (0:00:01.648) 0:00:27.109 ******* 2026-04-08 00:49:11.246907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-08 
00:49:11.246913 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-08 00:49:11.246919 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-08 00:49:11.246930 | orchestrator | 2026-04-08 00:49:11.246934 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-08 00:49:11.246937 | orchestrator | Wednesday 08 April 2026 00:47:16 +0000 (0:00:02.119) 0:00:29.228 ******* 2026-04-08 00:49:11.246941 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.246945 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:11.246949 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:11.246952 | orchestrator | 2026-04-08 00:49:11.246956 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-08 00:49:11.246960 | orchestrator | Wednesday 08 April 2026 00:47:17 +0000 (0:00:01.602) 0:00:30.831 ******* 2026-04-08 00:49:11.246964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:49:11.246982 | orchestrator | 2026-04-08 00:49:11.246986 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-08 00:49:11.247035 | orchestrator | Wednesday 08 April 2026 00:47:19 +0000 (0:00:01.584) 0:00:32.415 ******* 2026-04-08 00:49:11.247039 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:11.247043 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:11.247047 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:11.247051 | orchestrator | 2026-04-08 00:49:11.247055 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-08 00:49:11.247058 | orchestrator | Wednesday 08 April 2026 00:47:20 +0000 (0:00:01.094) 0:00:33.510 ******* 2026-04-08 00:49:11.247062 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:11.247073 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:11.247077 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:11.247080 | orchestrator | 2026-04-08 00:49:11.247089 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-08 00:49:11.247093 | orchestrator | Wednesday 08 April 2026 00:47:30 +0000 (0:00:10.125) 0:00:43.636 ******* 2026-04-08 00:49:11.247097 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:11.247101 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:11.247105 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:11.247108 | orchestrator | 2026-04-08 00:49:11.247112 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 
2026-04-08 00:49:11.247116 | orchestrator | 2026-04-08 00:49:11.247120 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-08 00:49:11.247123 | orchestrator | Wednesday 08 April 2026 00:47:31 +0000 (0:00:00.466) 0:00:44.103 ******* 2026-04-08 00:49:11.247127 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:11.247131 | orchestrator | 2026-04-08 00:49:11.247134 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-08 00:49:11.247138 | orchestrator | Wednesday 08 April 2026 00:47:31 +0000 (0:00:00.616) 0:00:44.720 ******* 2026-04-08 00:49:11.247142 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:11.247146 | orchestrator | 2026-04-08 00:49:11.247150 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-08 00:49:11.247153 | orchestrator | Wednesday 08 April 2026 00:47:32 +0000 (0:00:00.363) 0:00:45.083 ******* 2026-04-08 00:49:11.247157 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:11.247161 | orchestrator | 2026-04-08 00:49:11.247165 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-08 00:49:11.247168 | orchestrator | Wednesday 08 April 2026 00:47:39 +0000 (0:00:07.317) 0:00:52.400 ******* 2026-04-08 00:49:11.247172 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:11.247176 | orchestrator | 2026-04-08 00:49:11.247180 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-08 00:49:11.247183 | orchestrator | 2026-04-08 00:49:11.247187 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-08 00:49:11.247191 | orchestrator | Wednesday 08 April 2026 00:48:29 +0000 (0:00:49.837) 0:01:42.238 ******* 2026-04-08 00:49:11.247194 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:11.247198 | orchestrator | 2026-04-08 
00:49:11.247202 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-08 00:49:11.247206 | orchestrator | Wednesday 08 April 2026 00:48:30 +0000 (0:00:00.765) 0:01:43.004 ******* 2026-04-08 00:49:11.247211 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:11.247217 | orchestrator | 2026-04-08 00:49:11.247224 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-08 00:49:11.247230 | orchestrator | Wednesday 08 April 2026 00:48:30 +0000 (0:00:00.295) 0:01:43.300 ******* 2026-04-08 00:49:11.247236 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:11.247242 | orchestrator | 2026-04-08 00:49:11.247252 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-08 00:49:11.247258 | orchestrator | Wednesday 08 April 2026 00:48:32 +0000 (0:00:01.824) 0:01:45.124 ******* 2026-04-08 00:49:11.247265 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:11.247276 | orchestrator | 2026-04-08 00:49:11.247282 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-08 00:49:11.247288 | orchestrator | 2026-04-08 00:49:11.247294 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-08 00:49:11.247300 | orchestrator | Wednesday 08 April 2026 00:48:47 +0000 (0:00:15.722) 0:02:00.846 ******* 2026-04-08 00:49:11.247307 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:11.247313 | orchestrator | 2026-04-08 00:49:11.247324 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-08 00:49:11.247331 | orchestrator | Wednesday 08 April 2026 00:48:48 +0000 (0:00:00.945) 0:02:01.792 ******* 2026-04-08 00:49:11.247337 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:11.247344 | orchestrator | 2026-04-08 00:49:11.247351 | orchestrator | TASK [rabbitmq : 
Restart rabbitmq container] *********************************** 2026-04-08 00:49:11.247355 | orchestrator | Wednesday 08 April 2026 00:48:49 +0000 (0:00:00.192) 0:02:01.985 ******* 2026-04-08 00:49:11.247359 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:11.247363 | orchestrator | 2026-04-08 00:49:11.247366 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-08 00:49:11.247370 | orchestrator | Wednesday 08 April 2026 00:48:51 +0000 (0:00:01.940) 0:02:03.926 ******* 2026-04-08 00:49:11.247374 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:11.247378 | orchestrator | 2026-04-08 00:49:11.247381 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-08 00:49:11.247385 | orchestrator | 2026-04-08 00:49:11.247389 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-08 00:49:11.247392 | orchestrator | Wednesday 08 April 2026 00:49:05 +0000 (0:00:14.571) 0:02:18.497 ******* 2026-04-08 00:49:11.247396 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:49:11.247400 | orchestrator | 2026-04-08 00:49:11.247404 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-08 00:49:11.247407 | orchestrator | Wednesday 08 April 2026 00:49:06 +0000 (0:00:00.729) 0:02:19.227 ******* 2026-04-08 00:49:11.247411 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:11.247415 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:11.247418 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:11.247422 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-08 00:49:11.247426 | orchestrator | enable_outward_rabbitmq_True 2026-04-08 00:49:11.247430 | orchestrator | 2026-04-08 00:49:11.247433 | orchestrator | PLAY [Apply role rabbitmq (outward)] 
******************************************* 2026-04-08 00:49:11.247437 | orchestrator | skipping: no hosts matched 2026-04-08 00:49:11.247441 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-08 00:49:11.247445 | orchestrator | outward_rabbitmq_restart 2026-04-08 00:49:11.247448 | orchestrator | 2026-04-08 00:49:11.247452 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-04-08 00:49:11.247456 | orchestrator | skipping: no hosts matched 2026-04-08 00:49:11.247459 | orchestrator | 2026-04-08 00:49:11.247463 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-04-08 00:49:11.247467 | orchestrator | skipping: no hosts matched 2026-04-08 00:49:11.247470 | orchestrator | 2026-04-08 00:49:11.247474 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:49:11.247478 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-08 00:49:11.247482 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-08 00:49:11.247486 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:49:11.247490 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:49:11.247501 | orchestrator | 2026-04-08 00:49:11.247504 | orchestrator | 2026-04-08 00:49:11.247508 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:49:11.247512 | orchestrator | Wednesday 08 April 2026 00:49:08 +0000 (0:00:02.511) 0:02:21.739 ******* 2026-04-08 00:49:11.247516 | orchestrator | =============================================================================== 2026-04-08 00:49:11.247519 | orchestrator | rabbitmq : Waiting for rabbitmq to 
start ------------------------------- 80.13s 2026-04-08 00:49:11.247523 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.08s 2026-04-08 00:49:11.247527 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 10.13s 2026-04-08 00:49:11.247530 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.87s 2026-04-08 00:49:11.247534 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.57s 2026-04-08 00:49:11.247538 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.51s 2026-04-08 00:49:11.247541 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.45s 2026-04-08 00:49:11.247545 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.33s 2026-04-08 00:49:11.247549 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.27s 2026-04-08 00:49:11.247553 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.12s 2026-04-08 00:49:11.247556 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.82s 2026-04-08 00:49:11.247563 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.73s 2026-04-08 00:49:11.247567 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.65s 2026-04-08 00:49:11.247570 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.60s 2026-04-08 00:49:11.247574 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.58s 2026-04-08 00:49:11.247578 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.24s 2026-04-08 00:49:11.247582 | orchestrator | rabbitmq : Copying over erl_inetrc 
-------------------------------------- 1.23s 2026-04-08 00:49:11.247588 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.09s 2026-04-08 00:49:11.247592 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s 2026-04-08 00:49:11.247595 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.85s 2026-04-08 00:49:11.247599 | orchestrator | 2026-04-08 00:49:11 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:49:11.247603 | orchestrator | 2026-04-08 00:49:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:14.297540 | orchestrator | 2026-04-08 00:49:14 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:14.298706 | orchestrator | 2026-04-08 00:49:14 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:49:14.300302 | orchestrator | 2026-04-08 00:49:14 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:49:14.300344 | orchestrator | 2026-04-08 00:49:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:17.349503 | orchestrator | 2026-04-08 00:49:17 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:17.350245 | orchestrator | 2026-04-08 00:49:17 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:49:17.351982 | orchestrator | 2026-04-08 00:49:17 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:49:17.352066 | orchestrator | 2026-04-08 00:49:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:20.388426 | orchestrator | 2026-04-08 00:49:20 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:20.388543 | orchestrator | 2026-04-08 00:49:20 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 
00:49:56.916239 | orchestrator
| 2026-04-08 00:49:56 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state STARTED 2026-04-08 00:49:56.918456 | orchestrator | 2026-04-08 00:49:56 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:49:56.918733 | orchestrator | 2026-04-08 00:49:56 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED 2026-04-08 00:49:56.918754 | orchestrator | 2026-04-08 00:49:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:59.964557 | orchestrator | 2026-04-08 00:49:59 | INFO  | Task d03a930c-777c-40f2-a261-bb9265180f4a is in state SUCCESS 2026-04-08 00:49:59.967430 | orchestrator | 2026-04-08 00:49:59.967504 | orchestrator | 2026-04-08 00:49:59.967513 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:49:59.967522 | orchestrator | 2026-04-08 00:49:59.967528 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:49:59.967533 | orchestrator | Wednesday 08 April 2026 00:47:32 +0000 (0:00:00.413) 0:00:00.413 ******* 2026-04-08 00:49:59.967537 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.967548 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.967552 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.967556 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:49:59.967574 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:49:59.967578 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:49:59.967581 | orchestrator | 2026-04-08 00:49:59.967585 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:49:59.967589 | orchestrator | Wednesday 08 April 2026 00:47:33 +0000 (0:00:01.343) 0:00:01.757 ******* 2026-04-08 00:49:59.967593 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-08 00:49:59.967597 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-08 00:49:59.967601 | 
orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-08 00:49:59.967605 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-08 00:49:59.967608 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-08 00:49:59.967612 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-08 00:49:59.967616 | orchestrator | 2026-04-08 00:49:59.967620 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-08 00:49:59.967624 | orchestrator | 2026-04-08 00:49:59.967628 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-08 00:49:59.967632 | orchestrator | Wednesday 08 April 2026 00:47:34 +0000 (0:00:01.417) 0:00:03.175 ******* 2026-04-08 00:49:59.967636 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:49:59.967642 | orchestrator | 2026-04-08 00:49:59.967646 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-08 00:49:59.967649 | orchestrator | Wednesday 08 April 2026 00:47:36 +0000 (0:00:01.108) 0:00:04.283 ******* 2026-04-08 00:49:59.967655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967684 | orchestrator | 2026-04-08 00:49:59.967699 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-08 00:49:59.967703 | orchestrator | Wednesday 08 April 2026 00:47:37 +0000 (0:00:01.546) 0:00:05.829 ******* 2026-04-08 00:49:59.967709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967732 | orchestrator | 2026-04-08 00:49:59.967736 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-08 00:49:59.967740 | orchestrator | Wednesday 08 April 2026 00:47:39 +0000 (0:00:01.799) 0:00:07.629 ******* 2026-04-08 00:49:59.967744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967775 | orchestrator | 2026-04-08 00:49:59.967778 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-08 00:49:59.967782 | orchestrator | Wednesday 08 April 2026 00:47:40 +0000 (0:00:01.336) 0:00:08.965 ******* 2026-04-08 00:49:59.967786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967849 | orchestrator | 2026-04-08 00:49:59.967857 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-08 00:49:59.967863 | orchestrator | Wednesday 08 April 2026 00:47:42 +0000 (0:00:01.843) 0:00:10.809 ******* 2026-04-08 00:49:59.967872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-08 00:49:59.967878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.967903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.967913 | orchestrator |
2026-04-08 00:49:59.967920 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-08 00:49:59.967926 | orchestrator | Wednesday 08 April 2026 00:47:44 +0000 (0:00:01.792) 0:00:12.601 *******
2026-04-08 00:49:59.967932 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:49:59.967938 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:49:59.967944 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:49:59.967949 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:49:59.967955 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:49:59.967961 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:49:59.967966 | orchestrator |
2026-04-08 00:49:59.967973 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-04-08 00:49:59.967979 | orchestrator | Wednesday 08 April 2026 00:47:46 +0000 (0:00:02.502) 0:00:15.104 *******
2026-04-08 00:49:59.967985 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-04-08 00:49:59.967992 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-04-08 00:49:59.967998 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-04-08 00:49:59.968004 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-04-08 00:49:59.968011 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-04-08 00:49:59.968018 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-04-08 00:49:59.968084 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-08 00:49:59.968092 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-08 00:49:59.968104 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-08 00:49:59.968111 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-08 00:49:59.968117 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-08 00:49:59.968123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-08 00:49:59.968135 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-08 00:49:59.968143 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-08 00:49:59.968150 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-08 00:49:59.968156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-08 00:49:59.968164 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-08 00:49:59.968169 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-08 00:49:59.968173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-08 00:49:59.968178 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-08 00:49:59.968183 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-08 00:49:59.968187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-08 00:49:59.968192 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-08 00:49:59.968201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-08 00:49:59.968206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-08 00:49:59.968210 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-08 00:49:59.968214 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-08 00:49:59.968219 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-08 00:49:59.968223 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-08 00:49:59.968227 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-08 00:49:59.968231 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-08 00:49:59.968235 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-08 00:49:59.968240 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-08 00:49:59.968244 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-08 00:49:59.968248 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-08 00:49:59.968253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-08 00:49:59.968257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-08 00:49:59.968262 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-08 00:49:59.968266 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-08 00:49:59.968271 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-08 00:49:59.968275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-08 00:49:59.968279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-08 00:49:59.968284 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-08 00:49:59.968289 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-08 00:49:59.968297 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-08 00:49:59.968301 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-08 00:49:59.968309 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-08 00:49:59.968313 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-08 00:49:59.968318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-08 00:49:59.968322 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-08 00:49:59.968327 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-08 00:49:59.968335 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-08 00:49:59.968339 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-08 00:49:59.968344 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-08 00:49:59.968348 | orchestrator |
2026-04-08 00:49:59.968353 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-08 00:49:59.968357 | orchestrator | Wednesday 08 April 2026 00:48:04 +0000 (0:00:17.856) 0:00:32.960 *******
2026-04-08 00:49:59.968361 | orchestrator |
2026-04-08 00:49:59.968365 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-08 00:49:59.968369 | orchestrator | Wednesday 08 April 2026 00:48:04 +0000 (0:00:00.066) 0:00:33.026 *******
2026-04-08 00:49:59.968372 | orchestrator |
2026-04-08 00:49:59.968376 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-08 00:49:59.968380 | orchestrator | Wednesday 08 April 2026 00:48:04 +0000 (0:00:00.066) 0:00:33.092 *******
2026-04-08 00:49:59.968383 | orchestrator |
2026-04-08 00:49:59.968387 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-08 00:49:59.968391 | orchestrator | Wednesday 08 April 2026 00:48:04 +0000 (0:00:00.074) 0:00:33.166 *******
2026-04-08 00:49:59.968395 | orchestrator |
2026-04-08 00:49:59.968398 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-08 00:49:59.968402 | orchestrator | Wednesday 08 April 2026 00:48:05 +0000 (0:00:00.135) 0:00:33.302 *******
2026-04-08 00:49:59.968406 | orchestrator |
2026-04-08 00:49:59.968410 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-08 00:49:59.968413 | orchestrator | Wednesday 08 April 2026 00:48:05 +0000 (0:00:00.068) 0:00:33.370 *******
2026-04-08 00:49:59.968417 | orchestrator |
2026-04-08 00:49:59.968421 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-04-08 00:49:59.968451 | orchestrator | Wednesday 08 April 2026 00:48:05 +0000 (0:00:00.068) 0:00:33.439 *******
2026-04-08 00:49:59.968455 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.968459 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.968468 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.968472 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:49:59.968476 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:49:59.968479 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:49:59.968483 | orchestrator |
2026-04-08 00:49:59.968487 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-04-08 00:49:59.968491 | orchestrator | Wednesday 08 April 2026 00:48:08 +0000 (0:00:02.843) 0:00:36.282 *******
2026-04-08 00:49:59.968494 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:49:59.968498 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:49:59.968502 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:49:59.968506 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:49:59.968509 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:49:59.968513 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:49:59.968517 | orchestrator |
2026-04-08 00:49:59.968520 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-04-08 00:49:59.968524 | orchestrator |
2026-04-08 00:49:59.968528 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-08 00:49:59.968532 | orchestrator | Wednesday 08 April 2026 00:48:39 +0000 (0:00:31.781) 0:01:08.063 *******
2026-04-08 00:49:59.968535 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:49:59.968539 | orchestrator |
2026-04-08 00:49:59.968543 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-08 00:49:59.968547 | orchestrator | Wednesday 08 April 2026 00:48:40 +0000 (0:00:00.671) 0:01:08.735 *******
2026-04-08 00:49:59.968550 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:49:59.968557 | orchestrator |
2026-04-08 00:49:59.968561 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-04-08 00:49:59.968565 | orchestrator | Wednesday 08 April 2026 00:48:41 +0000 (0:00:00.693) 0:01:09.428 *******
2026-04-08 00:49:59.968569 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.968572 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.968576 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.968580 | orchestrator |
2026-04-08 00:49:59.968584 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-04-08 00:49:59.968587 | orchestrator | Wednesday 08 April 2026 00:48:42 +0000 (0:00:01.006) 0:01:10.434 *******
2026-04-08 00:49:59.968591 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.968595 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.968599 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.968605 | orchestrator |
2026-04-08 00:49:59.968609 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-04-08 00:49:59.968613 | orchestrator | Wednesday 08 April 2026 00:48:43 +0000 (0:00:00.795) 0:01:11.230 *******
2026-04-08 00:49:59.968617 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.968620 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.968624 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.968628 | orchestrator |
2026-04-08 00:49:59.968634 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-04-08 00:49:59.968638 | orchestrator | Wednesday 08 April 2026 00:48:43 +0000 (0:00:00.711) 0:01:11.942 *******
2026-04-08 00:49:59.968642 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.968645 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.968649 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.968653 | orchestrator |
2026-04-08 00:49:59.968657 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-04-08 00:49:59.968660 | orchestrator | Wednesday 08 April 2026 00:48:44 +0000 (0:00:00.423) 0:01:12.366 *******
2026-04-08 00:49:59.968664 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.968668 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.968671 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.968675 | orchestrator |
2026-04-08 00:49:59.968679 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-04-08 00:49:59.968683 | orchestrator | Wednesday 08 April 2026 00:48:44 +0000 (0:00:00.437) 0:01:12.804 *******
2026-04-08 00:49:59.968686 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968690 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968694 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968698 | orchestrator |
2026-04-08 00:49:59.968701 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-04-08 00:49:59.968705 | orchestrator | Wednesday 08 April 2026 00:48:44 +0000 (0:00:00.286) 0:01:13.090 *******
2026-04-08 00:49:59.968709 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968713 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968716 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968720 | orchestrator |
2026-04-08 00:49:59.968724 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-04-08 00:49:59.968728 | orchestrator | Wednesday 08 April 2026 00:48:45 +0000 (0:00:00.452) 0:01:13.543 *******
2026-04-08 00:49:59.968731 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968735 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968739 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968743 | orchestrator |
2026-04-08 00:49:59.968746 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-08 00:49:59.968750 | orchestrator | Wednesday 08 April 2026 00:48:45 +0000 (0:00:00.312) 0:01:13.856 *******
2026-04-08 00:49:59.968754 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968758 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968761 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968765 | orchestrator |
2026-04-08 00:49:59.968772 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-08 00:49:59.968776 | orchestrator | Wednesday 08 April 2026 00:48:45 +0000 (0:00:00.305) 0:01:14.161 *******
2026-04-08 00:49:59.968780 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968784 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968787 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968791 | orchestrator |
2026-04-08 00:49:59.968795 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-08 00:49:59.968799 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:00.420) 0:01:14.582 *******
2026-04-08 00:49:59.968802 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968806 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968810 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968814 | orchestrator |
2026-04-08 00:49:59.968817 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-08 00:49:59.968821 | orchestrator | Wednesday 08 April 2026 00:48:47 +0000 (0:00:01.103) 0:01:15.685 *******
2026-04-08 00:49:59.968825 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968829 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968832 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968836 | orchestrator |
2026-04-08 00:49:59.968840 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-08 00:49:59.968843 | orchestrator | Wednesday 08 April 2026 00:48:48 +0000 (0:00:00.776) 0:01:16.462 *******
2026-04-08 00:49:59.968847 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968851 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968855 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968858 | orchestrator |
2026-04-08 00:49:59.968862 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-08 00:49:59.968866 | orchestrator | Wednesday 08 April 2026 00:48:48 +0000 (0:00:00.398) 0:01:16.860 *******
2026-04-08 00:49:59.968870 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968873 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968877 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968881 | orchestrator |
2026-04-08 00:49:59.968885 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-08 00:49:59.968888 | orchestrator | Wednesday 08 April 2026 00:48:49 +0000 (0:00:00.571) 0:01:17.431 *******
2026-04-08 00:49:59.968892 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968896 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968899 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968903 | orchestrator |
2026-04-08 00:49:59.968907 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-08 00:49:59.968911 | orchestrator | Wednesday 08 April 2026 00:48:49 +0000 (0:00:00.337) 0:01:17.769 *******
2026-04-08 00:49:59.968915 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968918 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968922 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968926 | orchestrator |
2026-04-08 00:49:59.968930 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-08 00:49:59.968933 | orchestrator | Wednesday 08 April 2026 00:48:50 +0000 (0:00:00.468) 0:01:18.237 *******
2026-04-08 00:49:59.968937 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.968941 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.968947 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.968951 | orchestrator |
2026-04-08 00:49:59.968957 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-08 00:49:59.968963 | orchestrator | Wednesday 08 April 2026 00:48:50 +0000 (0:00:00.441) 0:01:18.679 *******
2026-04-08 00:49:59.968971 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:49:59.968981 | orchestrator |
2026-04-08 00:49:59.968989 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-04-08 00:49:59.969000 | orchestrator | Wednesday 08 April 2026 00:48:50 +0000 (0:00:00.521) 0:01:19.200 *******
2026-04-08 00:49:59.969005 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.969011 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.969017 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.969039 | orchestrator |
2026-04-08 00:49:59.969047 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-04-08 00:49:59.969052 | orchestrator | Wednesday 08 April 2026 00:48:51 +0000 (0:00:00.620) 0:01:19.820 *******
2026-04-08 00:49:59.969058 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:59.969063 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:59.969069 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:59.969075 | orchestrator |
2026-04-08 00:49:59.969081 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-04-08 00:49:59.969086 | orchestrator | Wednesday 08 April 2026 00:48:51 +0000 (0:00:00.390) 0:01:20.211 *******
2026-04-08 00:49:59.969092 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.969098 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.969104 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.969110 | orchestrator |
2026-04-08 00:49:59.969116 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-08 00:49:59.969122 | orchestrator | Wednesday 08 April 2026 00:48:52 +0000 (0:00:00.292) 0:01:20.504 *******
2026-04-08 00:49:59.969129 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.969134 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.969141 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.969147 | orchestrator |
2026-04-08 00:49:59.969151 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-08 00:49:59.969155 | orchestrator | Wednesday 08 April 2026 00:48:52 +0000 (0:00:00.297) 0:01:20.802 *******
2026-04-08 00:49:59.969159 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.969163 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.969166 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.969170 | orchestrator |
2026-04-08 00:49:59.969174 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-08 00:49:59.969178 | orchestrator | Wednesday 08 April 2026 00:48:53 +0000 (0:00:00.467) 0:01:21.269 *******
2026-04-08 00:49:59.969182 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.969186 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.969189 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.969193 | orchestrator |
2026-04-08 00:49:59.969197 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-04-08 00:49:59.969201 | orchestrator | Wednesday 08 April 2026 00:48:53 +0000 (0:00:00.313) 0:01:21.583 *******
2026-04-08 00:49:59.969204 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.969208 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.969212 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.969215 | orchestrator |
2026-04-08 00:49:59.969219 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-04-08 00:49:59.969223 | orchestrator | Wednesday 08 April 2026 00:48:53 +0000 (0:00:00.283) 0:01:21.866 *******
2026-04-08 00:49:59.969227 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:59.969230 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:59.969234 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:59.969238 | orchestrator |
2026-04-08 00:49:59.969241 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-08 00:49:59.969245 | orchestrator | Wednesday 08 April 2026 00:48:53 +0000 (0:00:00.272) 0:01:22.138 *******
2026-04-08 00:49:59.969250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969312 | orchestrator |
2026-04-08 00:49:59.969316 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-08 00:49:59.969320 | orchestrator | Wednesday 08 April 2026 00:48:55 +0000 (0:00:02.048) 0:01:24.187 *******
2026-04-08 00:49:59.969324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969368 | orchestrator |
2026-04-08 00:49:59.969371 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-08 00:49:59.969375 | orchestrator | Wednesday 08 April 2026 00:49:00 +0000 (0:00:04.212) 0:01:28.400 *******
2026-04-08 00:49:59.969379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:49:59.969425 | orchestrator |
2026-04-08 00:49:59.969428 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-08 00:49:59.969432 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:02.414) 0:01:30.814 *******
2026-04-08 00:49:59.969436 | orchestrator |
2026-04-08 00:49:59.969440 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-08 00:49:59.969444 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:00.064) 0:01:30.879 *******
2026-04-08 00:49:59.969447 | orchestrator |
2026-04-08 00:49:59.969451 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-08 00:49:59.969455 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:00.070) 0:01:30.949 *******
2026-04-08 00:49:59.969459 | orchestrator |
2026-04-08 00:49:59.969466 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-08 00:49:59.969469 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:00.069) 0:01:31.018 *******
2026-04-08 00:49:59.969473 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:49:59.969477 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:49:59.969481 | orchestrator | changed: [testbed-node-1]
2026-04-08
00:49:59.969485 | orchestrator | 2026-04-08 00:49:59.969488 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-08 00:49:59.969492 | orchestrator | Wednesday 08 April 2026 00:49:10 +0000 (0:00:07.708) 0:01:38.727 ******* 2026-04-08 00:49:59.969496 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:59.969500 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:59.969503 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:59.969507 | orchestrator | 2026-04-08 00:49:59.969511 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-08 00:49:59.969514 | orchestrator | Wednesday 08 April 2026 00:49:18 +0000 (0:00:07.536) 0:01:46.263 ******* 2026-04-08 00:49:59.969518 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:59.969522 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:59.969526 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:59.969529 | orchestrator | 2026-04-08 00:49:59.969533 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-08 00:49:59.969537 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:07.840) 0:01:54.104 ******* 2026-04-08 00:49:59.969541 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:59.969544 | orchestrator | 2026-04-08 00:49:59.969548 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-08 00:49:59.969552 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.109) 0:01:54.213 ******* 2026-04-08 00:49:59.969555 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.969559 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.969563 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.969567 | orchestrator | 2026-04-08 00:49:59.969570 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-08 
00:49:59.969574 | orchestrator | Wednesday 08 April 2026 00:49:26 +0000 (0:00:00.889) 0:01:55.103 ******* 2026-04-08 00:49:59.969578 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:59.969582 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:59.969585 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:59.969589 | orchestrator | 2026-04-08 00:49:59.969593 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-08 00:49:59.969597 | orchestrator | Wednesday 08 April 2026 00:49:27 +0000 (0:00:00.825) 0:01:55.928 ******* 2026-04-08 00:49:59.969600 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.969604 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.969608 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.969612 | orchestrator | 2026-04-08 00:49:59.969615 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-08 00:49:59.969619 | orchestrator | Wednesday 08 April 2026 00:49:28 +0000 (0:00:00.734) 0:01:56.663 ******* 2026-04-08 00:49:59.969623 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:59.969626 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:59.969630 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:59.969634 | orchestrator | 2026-04-08 00:49:59.969638 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-08 00:49:59.969641 | orchestrator | Wednesday 08 April 2026 00:49:29 +0000 (0:00:00.653) 0:01:57.316 ******* 2026-04-08 00:49:59.969645 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.969649 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.969655 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.969659 | orchestrator | 2026-04-08 00:49:59.969663 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-08 00:49:59.969667 | orchestrator | 
Wednesday 08 April 2026 00:49:29 +0000 (0:00:00.808) 0:01:58.125 ******* 2026-04-08 00:49:59.969671 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.969677 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.969681 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.969684 | orchestrator | 2026-04-08 00:49:59.969691 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-08 00:49:59.969695 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:00.823) 0:01:58.949 ******* 2026-04-08 00:49:59.969699 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.969703 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.969707 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.969710 | orchestrator | 2026-04-08 00:49:59.969714 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-08 00:49:59.969718 | orchestrator | Wednesday 08 April 2026 00:49:31 +0000 (0:00:00.596) 0:01:59.545 ******* 2026-04-08 00:49:59.969722 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969726 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969734 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969743 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969748 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969754 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969774 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969781 | orchestrator | 2026-04-08 00:49:59.969787 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-08 00:49:59.969793 | orchestrator | Wednesday 08 April 2026 00:49:33 +0000 (0:00:01.764) 0:02:01.310 ******* 2026-04-08 00:49:59.969803 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969809 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969814 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969820 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969839 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969862 | orchestrator | 2026-04-08 00:49:59.969868 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-08 00:49:59.969874 | orchestrator | Wednesday 08 April 2026 00:49:37 +0000 (0:00:04.065) 0:02:05.376 ******* 2026-04-08 00:49:59.969885 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969892 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-08 00:49:59.969896 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969908 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969919 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:49:59.969927 | orchestrator | 2026-04-08 00:49:59.969931 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-08 00:49:59.969934 | orchestrator | Wednesday 08 April 2026 00:49:40 +0000 (0:00:02.972) 0:02:08.348 ******* 2026-04-08 00:49:59.969938 | orchestrator | 2026-04-08 00:49:59.969942 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-08 00:49:59.969946 | orchestrator | Wednesday 08 April 2026 00:49:40 +0000 (0:00:00.068) 0:02:08.416 ******* 2026-04-08 00:49:59.969949 | orchestrator | 2026-04-08 00:49:59.969953 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-08 00:49:59.969957 | orchestrator | Wednesday 08 April 2026 00:49:40 +0000 (0:00:00.313) 0:02:08.730 ******* 2026-04-08 00:49:59.969961 | orchestrator | 2026-04-08 00:49:59.969964 | 
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-08 00:49:59.969968 | orchestrator | Wednesday 08 April 2026 00:49:40 +0000 (0:00:00.084) 0:02:08.814 ******* 2026-04-08 00:49:59.969972 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:59.969975 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:59.969979 | orchestrator | 2026-04-08 00:49:59.969985 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-08 00:49:59.969989 | orchestrator | Wednesday 08 April 2026 00:49:46 +0000 (0:00:06.241) 0:02:15.056 ******* 2026-04-08 00:49:59.969994 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:59.970000 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:59.970006 | orchestrator | 2026-04-08 00:49:59.970117 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-08 00:49:59.970135 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:06.211) 0:02:21.267 ******* 2026-04-08 00:49:59.970142 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:59.970148 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:59.970154 | orchestrator | 2026-04-08 00:49:59.970159 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-08 00:49:59.970166 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:01.130) 0:02:22.397 ******* 2026-04-08 00:49:59.970172 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:59.970178 | orchestrator | 2026-04-08 00:49:59.970184 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-08 00:49:59.970190 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:00.154) 0:02:22.552 ******* 2026-04-08 00:49:59.970196 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.970202 | orchestrator | ok: [testbed-node-1] 2026-04-08 
00:49:59.970208 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.970214 | orchestrator | 2026-04-08 00:49:59.970221 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-08 00:49:59.970227 | orchestrator | Wednesday 08 April 2026 00:49:55 +0000 (0:00:00.876) 0:02:23.428 ******* 2026-04-08 00:49:59.970234 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:59.970238 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:59.970242 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:59.970245 | orchestrator | 2026-04-08 00:49:59.970249 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-08 00:49:59.970253 | orchestrator | Wednesday 08 April 2026 00:49:55 +0000 (0:00:00.706) 0:02:24.135 ******* 2026-04-08 00:49:59.970257 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.970260 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.970264 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.970268 | orchestrator | 2026-04-08 00:49:59.970272 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-08 00:49:59.970275 | orchestrator | Wednesday 08 April 2026 00:49:56 +0000 (0:00:00.944) 0:02:25.079 ******* 2026-04-08 00:49:59.970284 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:59.970288 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:59.970292 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:59.970296 | orchestrator | 2026-04-08 00:49:59.970299 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-08 00:49:59.970303 | orchestrator | Wednesday 08 April 2026 00:49:57 +0000 (0:00:00.660) 0:02:25.739 ******* 2026-04-08 00:49:59.970307 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.970311 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.970315 | orchestrator | ok: 
[testbed-node-2] 2026-04-08 00:49:59.970318 | orchestrator | 2026-04-08 00:49:59.970322 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-08 00:49:59.970326 | orchestrator | Wednesday 08 April 2026 00:49:58 +0000 (0:00:00.837) 0:02:26.577 ******* 2026-04-08 00:49:59.970329 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:59.970333 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:59.970337 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:59.970341 | orchestrator | 2026-04-08 00:49:59.970344 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:49:59.970348 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-08 00:49:59.970353 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-08 00:49:59.970357 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-08 00:49:59.970361 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:49:59.970365 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:49:59.970369 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:49:59.970372 | orchestrator | 2026-04-08 00:49:59.970376 | orchestrator | 2026-04-08 00:49:59.970380 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:49:59.970384 | orchestrator | Wednesday 08 April 2026 00:49:59 +0000 (0:00:01.249) 0:02:27.827 ******* 2026-04-08 00:49:59.970387 | orchestrator | =============================================================================== 2026-04-08 00:49:59.970391 | orchestrator | ovn-controller : Restart 
ovn-controller container ---------------------- 31.78s 2026-04-08 00:49:59.970395 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.86s 2026-04-08 00:49:59.970399 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.95s 2026-04-08 00:49:59.970402 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.75s 2026-04-08 00:49:59.970406 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.97s 2026-04-08 00:49:59.970410 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.21s 2026-04-08 00:49:59.970414 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.07s 2026-04-08 00:49:59.970423 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.97s 2026-04-08 00:49:59.970427 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.84s 2026-04-08 00:49:59.970430 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.50s 2026-04-08 00:49:59.970434 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.41s 2026-04-08 00:49:59.970443 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.05s 2026-04-08 00:49:59.970447 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.84s 2026-04-08 00:49:59.970454 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.80s 2026-04-08 00:49:59.970458 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.79s 2026-04-08 00:49:59.970462 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.76s 2026-04-08 00:49:59.970465 | orchestrator | ovn-controller : Ensuring config 
directories exist ---------------------- 1.55s
2026-04-08 00:49:59.970469 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.42s
2026-04-08 00:49:59.970473 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.34s
2026-04-08 00:49:59.970477 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.34s
2026-04-08 00:49:59.970499 | orchestrator | 2026-04-08 00:49:59 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:49:59.970503 | orchestrator | 2026-04-08 00:49:59 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state STARTED
2026-04-08 00:49:59.970507 | orchestrator | 2026-04-08 00:49:59 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks c0473fbb-9133-45e9-b3d1-285ceebcf6d7 and 3d27ed74-b481-4738-8949-ca471472a82a repeated every ~3 seconds from 00:50:03 through 00:52:47 ...]
2026-04-08 00:52:50.513271 | orchestrator | 2026-04-08 00:52:50 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED
2026-04-08 00:52:50.517242 | orchestrator | 2026-04-08 00:52:50 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED
2026-04-08 00:52:50.517738 | orchestrator | 2026-04-08 00:52:50 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED
2026-04-08 00:52:50.523553 | orchestrator | 2026-04-08 00:52:50 | INFO  | Task 3d27ed74-b481-4738-8949-ca471472a82a is in state SUCCESS
2026-04-08 00:52:50.526599 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:52:50.526628 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:52:50.526698 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:00.402) 0:00:00.402 *******
2026-04-08 00:52:50.526706 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:50.526714 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:50.526721 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:50.526735 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:52:50.526757 | orchestrator | Wednesday 08 April 2026 00:46:28 +0000 (0:00:00.343) 0:00:00.746 *******
2026-04-08 00:52:50.526764 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-08 00:52:50.526771 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-08 00:52:50.526800 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-08 00:52:50.526828 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-08 00:52:50.526840 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-08 00:52:50.526846 | orchestrator | Wednesday 08 April 2026 00:46:28 +0000 (0:00:00.389) 0:00:01.135 *******
2026-04-08 00:52:50.526853 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:50.526885 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-08 00:52:50.526892 | orchestrator | Wednesday 08 April 2026 00:46:29 +0000 (0:00:00.839) 0:00:01.975 *******
2026-04-08 00:52:50.526933 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:50.526954 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:50.526960 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:50.527014 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-08 00:52:50.527020 | orchestrator | Wednesday 08 April 2026 00:46:30 +0000 (0:00:01.146) 0:00:03.122 *******
2026-04-08 00:52:50.527026 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:50.527038 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-08 00:52:50.527044 | orchestrator | Wednesday 08 April 2026 00:46:31 +0000 (0:00:00.830) 0:00:03.953 *******
2026-04-08 00:52:50.527050 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:50.527056 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:50.527062 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:50.527075 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-08 00:52:50.527081 | orchestrator | Wednesday 08 April 2026 00:46:32 +0000 (0:00:00.836) 0:00:04.789 *******
2026-04-08 00:52:50.527088 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-08 00:52:50.527095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-08 00:52:50.527104 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-08 00:52:50.527112 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-08 00:52:50.527197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-08 00:52:50.527207 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-08 00:52:50.527212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-08 00:52:50.527218 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-08 00:52:50.527224 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-08 00:52:50.527230 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-08 00:52:50.527236 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-08 00:52:50.527242 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-08 00:52:50.527253 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-08 00:52:50.527259 | orchestrator | Wednesday 08 April 2026 00:46:36 +0000 (0:00:04.456) 0:00:09.245 *******
2026-04-08 00:52:50.527265 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-08 00:52:50.527273 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-08 00:52:50.527279 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-08 00:52:50.527328 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-08 00:52:50.527335 | orchestrator | Wednesday 08 April 2026 00:46:37 +0000 (0:00:01.177) 0:00:10.423 *******
2026-04-08 00:52:50.527340 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-08 00:52:50.527344 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-08 00:52:50.527349 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-08 00:52:50.527357 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-08 00:52:50.527376 | orchestrator | Wednesday 08 April 2026 00:46:39 +0000 (0:00:01.509) 0:00:11.932 *******
2026-04-08 00:52:50.527381 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-08 00:52:50.527386 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.527407 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-08 00:52:50.527412 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.527416 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-08 00:52:50.527421 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.527458 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-08 00:52:50.527463 | orchestrator | Wednesday 08 April 2026 00:46:40 +0000 (0:00:01.086) 0:00:13.019 *******
2026-04-08 00:52:50.527476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.527487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.527499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.527504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.527509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.527518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.527527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.527532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.527537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.527550 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-08 00:52:50.527554 | orchestrator | Wednesday 08 April 2026 00:46:42 +0000 (0:00:01.653) 0:00:14.673 *******
2026-04-08 00:52:50.527559 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.527563 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.527568 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.527592 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-08 00:52:50.527597 | orchestrator | Wednesday 08 April 2026 00:46:43 +0000 (0:00:01.220) 0:00:15.893 *******
2026-04-08 00:52:50.527602 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-08 00:52:50.527606 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-08 00:52:50.527610 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-08 00:52:50.527615 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-08 00:52:50.527619 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-08 00:52:50.527623 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-08 00:52:50.527632 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-08 00:52:50.527636 | orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:02.975) 0:00:18.869 *******
2026-04-08 00:52:50.527641 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.527646 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.527726 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.527740 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-08 00:52:50.527748 | orchestrator | Wednesday 08 April 2026 00:46:47 +0000 (0:00:01.179) 0:00:20.048 *******
2026-04-08 00:52:50.527802 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:50.527826 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:50.527833 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:50.527846 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-08 00:52:50.527852 | orchestrator | Wednesday 08 April 2026 00:46:49 +0000 (0:00:02.058) 0:00:22.106 *******
2026-04-08 00:52:50.527860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.527875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.527887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.528016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.528027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-08 00:52:50.528034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.528041 | orchestrator | skipping: [testbed-node-1]
2026-04-08 
00:52:50.528048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.528055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:52:50.528061 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.528074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-04-08 00:52:50.528095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.528102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.528108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:52:50.528115 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:52:50.528120 | orchestrator | 2026-04-08 00:52:50.528127 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-08 00:52:50.528152 | orchestrator | Wednesday 08 April 2026 00:46:50 +0000 (0:00:00.698) 0:00:22.805 ******* 2026-04-08 00:52:50.528158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.528215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:52:50.528222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.528228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348', 
'__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:52:50.528253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.528268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348', 
'__omit_place_holder__86bff6b072ff50646f342edf4e52f0da92348348'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:52:50.528274 | orchestrator | 2026-04-08 00:52:50.528280 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-08 00:52:50.528287 | orchestrator | Wednesday 08 April 2026 00:46:55 +0000 (0:00:05.178) 0:00:27.983 ******* 2026-04-08 00:52:50.528293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.528348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:52:50.528354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:52:50.528361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:52:50.528368 | orchestrator | 2026-04-08 00:52:50.528380 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-08 00:52:50.528386 | orchestrator | Wednesday 08 April 2026 00:46:59 +0000 (0:00:03.767) 0:00:31.751 ******* 2026-04-08 00:52:50.528393 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-08 00:52:50.528400 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-08 00:52:50.528406 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-08 00:52:50.528428 | orchestrator | 2026-04-08 00:52:50.528436 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-08 00:52:50.528449 | orchestrator | Wednesday 08 April 2026 00:47:01 +0000 (0:00:02.187) 0:00:33.939 ******* 2026-04-08 00:52:50.528484 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-08 00:52:50.528491 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-08 00:52:50.528497 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-08 00:52:50.528504 | orchestrator | 2026-04-08 00:52:50.529482 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-08 00:52:50.529525 | orchestrator | Wednesday 08 April 2026 00:47:05 +0000 (0:00:04.412) 0:00:38.351 ******* 2026-04-08 00:52:50.529529 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.529534 
| orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.529538 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.529542 | orchestrator | 2026-04-08 00:52:50.529546 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-08 00:52:50.529550 | orchestrator | Wednesday 08 April 2026 00:47:08 +0000 (0:00:02.777) 0:00:41.129 ******* 2026-04-08 00:52:50.529558 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-08 00:52:50.529564 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-08 00:52:50.529568 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-08 00:52:50.529572 | orchestrator | 2026-04-08 00:52:50.529575 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-08 00:52:50.529579 | orchestrator | Wednesday 08 April 2026 00:47:11 +0000 (0:00:02.550) 0:00:43.679 ******* 2026-04-08 00:52:50.529583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-08 00:52:50.529588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-08 00:52:50.529591 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-08 00:52:50.529595 | orchestrator | 2026-04-08 00:52:50.529613 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-08 00:52:50.529617 | orchestrator | Wednesday 08 April 2026 00:47:13 +0000 (0:00:02.760) 0:00:46.440 ******* 2026-04-08 00:52:50.529628 | orchestrator | changed: [testbed-node-2] => 
(item=haproxy.pem) 2026-04-08 00:52:50.529632 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-08 00:52:50.529636 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-08 00:52:50.529640 | orchestrator | 2026-04-08 00:52:50.529644 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-08 00:52:50.529648 | orchestrator | Wednesday 08 April 2026 00:47:15 +0000 (0:00:01.653) 0:00:48.093 ******* 2026-04-08 00:52:50.529651 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-08 00:52:50.529656 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-08 00:52:50.529668 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-08 00:52:50.529672 | orchestrator | 2026-04-08 00:52:50.529676 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-08 00:52:50.529679 | orchestrator | Wednesday 08 April 2026 00:47:19 +0000 (0:00:03.584) 0:00:51.678 ******* 2026-04-08 00:52:50.529684 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.529687 | orchestrator | 2026-04-08 00:52:50.529691 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-08 00:52:50.529695 | orchestrator | Wednesday 08 April 2026 00:47:19 +0000 (0:00:00.880) 0:00:52.558 ******* 2026-04-08 00:52:50.529700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.529705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.529716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-08 00:52:50.529724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.529728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.529737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:52:50.529741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:52:50.529746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:52:50.529750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:52:50.529754 | orchestrator | 2026-04-08 00:52:50.529758 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-08 00:52:50.529762 | orchestrator | Wednesday 08 April 2026 00:47:23 +0000 (0:00:03.824) 0:00:56.383 ******* 2026-04-08 00:52:50.529770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529788 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.529792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529817 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.529821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529828 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.529832 | orchestrator | 2026-04-08 00:52:50.529836 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-08 00:52:50.529840 | orchestrator | Wednesday 08 April 2026 00:47:24 +0000 (0:00:00.791) 0:00:57.174 ******* 2026-04-08 00:52:50.529844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529856 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.529860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529880 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.529884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529896 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.529900 | orchestrator | 2026-04-08 00:52:50.529904 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-08 00:52:50.529908 | orchestrator | Wednesday 08 April 2026 00:47:26 +0000 (0:00:01.859) 0:00:59.034 ******* 2026-04-08 00:52:50.529911 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529933 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.529936 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529948 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529956 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.529963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.529967 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.529970 | orchestrator | 2026-04-08 00:52:50.529977 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2026-04-08 00:52:50.529981 | orchestrator | Wednesday 08 April 2026 00:47:27 +0000 (0:00:01.040) 0:01:00.074 ******* 2026-04-08 00:52:50.529988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.529992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.529996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-08 00:52:50.530000 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.530004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.530008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.530051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.530058 | 
orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.530069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.530075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.530080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.530085 | orchestrator | skipping: [testbed-node-2] 
2026-04-08 00:52:50.530089 | orchestrator | 2026-04-08 00:52:50.530094 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-08 00:52:50.530098 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:00.945) 0:01:01.020 ******* 2026-04-08 00:52:50.530103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.530157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.530168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.530174 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.530192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.530203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.530209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:52:50.530215 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.530220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:52:50.530227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:52:50.530234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530240 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.530246 | orchestrator |
2026-04-08 00:52:50.530252 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-04-08 00:52:50.530259 | orchestrator | Wednesday 08 April 2026 00:47:30 +0000 (0:00:01.806) 0:01:02.826 *******
2026-04-08 00:52:50.530275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530302 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.530309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530361 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.530372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530378 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.530385 | orchestrator |
2026-04-08 00:52:50.530391 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-04-08 00:52:50.530397 | orchestrator | Wednesday 08 April 2026 00:47:31 +0000 (0:00:01.073) 0:01:03.900 *******
2026-04-08 00:52:50.530403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530420 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.530427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530463 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.530472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530490 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.530496 | orchestrator |
2026-04-08 00:52:50.530502 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-04-08 00:52:50.530508 | orchestrator | Wednesday 08 April 2026 00:47:32 +0000 (0:00:00.840) 0:01:04.740 *******
2026-04-08 00:52:50.530514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530538 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.530553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530565 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.530569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530584 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.530588 | orchestrator |
2026-04-08 00:52:50.530592 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-08 00:52:50.530595 | orchestrator | Wednesday 08 April 2026 00:47:33 +0000 (0:00:01.896) 0:01:06.637 *******
2026-04-08 00:52:50.530599 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-08 00:52:50.530604 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-08 00:52:50.530610 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-08 00:52:50.530614 | orchestrator |
2026-04-08 00:52:50.530618 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-08 00:52:50.530621 | orchestrator | Wednesday 08 April 2026 00:47:35 +0000 (0:00:01.863) 0:01:08.500 *******
2026-04-08 00:52:50.530625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-08 00:52:50.530629 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-08 00:52:50.530636 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-08 00:52:50.530640 | orchestrator |
2026-04-08 00:52:50.530644 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-08 00:52:50.530648 | orchestrator | Wednesday 08 April 2026 00:47:37 +0000 (0:00:01.555) 0:01:10.056 *******
2026-04-08 00:52:50.530651 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 00:52:50.530656 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 00:52:50.530659 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 00:52:50.530663 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 00:52:50.530667 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.530671 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 00:52:50.530675 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.530678 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 00:52:50.530686 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.530690 | orchestrator |
2026-04-08 00:52:50.530694 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-04-08 00:52:50.530698 | orchestrator | Wednesday 08 April 2026 00:47:38 +0000 (0:00:01.401) 0:01:11.457 *******
2026-04-08 00:52:50.530702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:52:50.530717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:52:50.530735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:52:50.530747 | orchestrator |
2026-04-08 00:52:50.530751 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-08 00:52:50.530755 | orchestrator | Wednesday 08 April 2026 00:47:41 +0000 (0:00:02.405) 0:01:13.863 *******
2026-04-08 00:52:50.530759 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:50.530763 | orchestrator |
2026-04-08 00:52:50.530766 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-08 00:52:50.530770 | orchestrator | Wednesday 08 April 2026 00:47:41 +0000 (0:00:00.557) 0:01:14.421 *******
2026-04-08 00:52:50.530775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-08 00:52:50.530786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.530790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-08 00:52:50.530806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-08 00:52:50.530810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.530816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.530825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530844 | orchestrator |
2026-04-08 00:52:50.530848 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-08 00:52:50.530852 | orchestrator | Wednesday 08 April 2026 00:47:45 +0000 (0:00:04.103) 0:01:18.524 *******
2026-04-08 00:52:50.530856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-08 00:52:50.530864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.530870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530882 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.530886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-08 00:52:50.530890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.530894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.530899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.530903 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.530913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-08 00:52:50.530921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-08 00:52:50.530926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.530930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.530935 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.530939 | orchestrator | 2026-04-08 00:52:50.530943 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-08 00:52:50.530948 | orchestrator | Wednesday 08 April 2026 00:47:46 +0000 (0:00:00.583) 0:01:19.108 ******* 2026-04-08 00:52:50.530953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-08 00:52:50.530959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-08 00:52:50.530964 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.530969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-08 00:52:50.530973 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-08 00:52:50.530977 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.530982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-08 00:52:50.530990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-08 00:52:50.530994 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.530999 | orchestrator | 2026-04-08 00:52:50.531006 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-08 00:52:50.531011 | orchestrator | Wednesday 08 April 2026 00:47:47 +0000 (0:00:00.967) 0:01:20.075 ******* 2026-04-08 00:52:50.531015 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.531020 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.531024 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.531029 | orchestrator | 2026-04-08 00:52:50.531033 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-08 00:52:50.531037 | orchestrator | Wednesday 08 April 2026 00:47:49 +0000 (0:00:02.355) 0:01:22.430 ******* 2026-04-08 00:52:50.531042 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.531046 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.531054 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.531058 | orchestrator | 2026-04-08 00:52:50.531062 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-08 00:52:50.531067 | 
orchestrator | Wednesday 08 April 2026 00:47:51 +0000 (0:00:01.770) 0:01:24.201 ******* 2026-04-08 00:52:50.531071 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.531075 | orchestrator | 2026-04-08 00:52:50.531080 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-08 00:52:50.531084 | orchestrator | Wednesday 08 April 2026 00:47:52 +0000 (0:00:00.603) 0:01:24.805 ******* 2026-04-08 00:52:50.531089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.531094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.531098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2026-04-08 00:52:50.531199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531217 | orchestrator | 2026-04-08 00:52:50.531222 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-08 00:52:50.531226 | orchestrator | Wednesday 08 April 2026 00:47:55 +0000 (0:00:03.224) 0:01:28.029 ******* 2026-04-08 00:52:50.531235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.531245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531258 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.531264 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.531274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531294 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.531309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.531315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531322 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531327 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.531332 | orchestrator | 2026-04-08 00:52:50.531338 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-08 00:52:50.531343 | orchestrator | Wednesday 08 April 2026 00:47:56 +0000 (0:00:01.203) 0:01:29.232 ******* 2026-04-08 00:52:50.531349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-08 00:52:50.531357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-08 00:52:50.531367 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.531373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-08 00:52:50.531379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-08 00:52:50.531385 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.531391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-08 00:52:50.531397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-08 00:52:50.531402 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.531408 | orchestrator | 2026-04-08 00:52:50.531413 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-08 00:52:50.531419 | orchestrator | Wednesday 08 April 2026 00:47:57 +0000 (0:00:00.950) 0:01:30.183 ******* 2026-04-08 00:52:50.531425 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.531431 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.531436 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.531442 | orchestrator | 2026-04-08 00:52:50.531448 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-08 00:52:50.531454 | orchestrator | Wednesday 08 April 2026 00:47:58 +0000 (0:00:01.271) 0:01:31.454 ******* 2026-04-08 00:52:50.531460 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.531465 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.531471 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.531477 | orchestrator | 2026-04-08 00:52:50.531488 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-08 00:52:50.531494 | orchestrator | Wednesday 08 April 2026 00:48:00 +0000 (0:00:01.970) 0:01:33.425 ******* 2026-04-08 
00:52:50.531500 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.531506 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.531512 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.531518 | orchestrator | 2026-04-08 00:52:50.531524 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-08 00:52:50.531530 | orchestrator | Wednesday 08 April 2026 00:48:01 +0000 (0:00:00.301) 0:01:33.727 ******* 2026-04-08 00:52:50.531543 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.531550 | orchestrator | 2026-04-08 00:52:50.531556 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-08 00:52:50.531561 | orchestrator | Wednesday 08 April 2026 00:48:01 +0000 (0:00:00.897) 0:01:34.624 ******* 2026-04-08 00:52:50.531568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-08 00:52:50.531581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-08 00:52:50.531587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-08 00:52:50.531594 | orchestrator | 2026-04-08 00:52:50.531600 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-08 00:52:50.531607 | orchestrator | Wednesday 08 April 2026 00:48:04 +0000 (0:00:02.920) 0:01:37.545 ******* 2026-04-08 00:52:50.531617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-08 00:52:50.531623 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.531632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-08 00:52:50.531638 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.531644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-08 00:52:50.531655 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.531661 | orchestrator | 2026-04-08 00:52:50.531666 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-08 00:52:50.531672 | orchestrator | Wednesday 08 April 2026 00:48:07 +0000 (0:00:02.717) 0:01:40.262 ******* 2026-04-08 00:52:50.531679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:52:50.531687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:52:50.531695 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.531701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:52:50.531707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:52:50.531713 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.531723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:52:50.531732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:52:50.531738 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.531744 | orchestrator | 2026-04-08 00:52:50.531750 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users 
config] *********** 2026-04-08 00:52:50.531760 | orchestrator | Wednesday 08 April 2026 00:48:09 +0000 (0:00:02.284) 0:01:42.547 ******* 2026-04-08 00:52:50.531766 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.531772 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.531778 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.531783 | orchestrator | 2026-04-08 00:52:50.531790 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-08 00:52:50.531796 | orchestrator | Wednesday 08 April 2026 00:48:10 +0000 (0:00:00.548) 0:01:43.095 ******* 2026-04-08 00:52:50.531802 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.531808 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.531814 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.531820 | orchestrator | 2026-04-08 00:52:50.531826 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-08 00:52:50.531832 | orchestrator | Wednesday 08 April 2026 00:48:11 +0000 (0:00:01.113) 0:01:44.209 ******* 2026-04-08 00:52:50.531837 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.531843 | orchestrator | 2026-04-08 00:52:50.531849 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-08 00:52:50.531855 | orchestrator | Wednesday 08 April 2026 00:48:12 +0000 (0:00:00.806) 0:01:45.016 ******* 2026-04-08 00:52:50.531861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.531868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.531965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.531988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.531999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532033 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532039 | orchestrator | 2026-04-08 00:52:50.532045 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-08 00:52:50.532051 | orchestrator | Wednesday 08 April 2026 00:48:16 +0000 (0:00:04.479) 0:01:49.495 ******* 2026-04-08 00:52:50.532057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.532064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2026-04-08 00:52:50.532100 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.532105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.532112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532157 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.532171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.532177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532195 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.532201 | orchestrator | 2026-04-08 00:52:50.532207 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-08 00:52:50.532214 | orchestrator | Wednesday 08 April 2026 00:48:17 +0000 (0:00:00.925) 0:01:50.420 ******* 2026-04-08 00:52:50.532220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-08 00:52:50.532231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-08 00:52:50.532238 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.532244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-08 00:52:50.532250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-08 00:52:50.532260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-08 00:52:50.532266 
| orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.532273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-08 00:52:50.532279 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.532285 | orchestrator | 2026-04-08 00:52:50.532291 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-08 00:52:50.532304 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:00.974) 0:01:51.395 ******* 2026-04-08 00:52:50.532311 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.532317 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.532323 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.532330 | orchestrator | 2026-04-08 00:52:50.532336 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-08 00:52:50.532342 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:01.259) 0:01:52.654 ******* 2026-04-08 00:52:50.532348 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.532354 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.532360 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.532366 | orchestrator | 2026-04-08 00:52:50.532371 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-08 00:52:50.532377 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:02.112) 0:01:54.767 ******* 2026-04-08 00:52:50.532383 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.532388 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.532394 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.532400 | orchestrator | 2026-04-08 00:52:50.532406 | orchestrator | TASK [include_role : 
cyborg] *************************************************** 2026-04-08 00:52:50.532412 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:00.276) 0:01:55.044 ******* 2026-04-08 00:52:50.532418 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.532424 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.532430 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.532435 | orchestrator | 2026-04-08 00:52:50.532441 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-08 00:52:50.532447 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:00.251) 0:01:55.295 ******* 2026-04-08 00:52:50.532454 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.532460 | orchestrator | 2026-04-08 00:52:50.532466 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-08 00:52:50.532472 | orchestrator | Wednesday 08 April 2026 00:48:23 +0000 (0:00:00.828) 0:01:56.124 ******* 2026-04-08 00:52:50.532478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 00:52:50.532490 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:52:50.532497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 00:52:50.532550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:52:50.532559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-08 00:52:50.532568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 00:52:50.532606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:52:50.532616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 
00:52:50.532639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532652 | orchestrator | 2026-04-08 00:52:50.532658 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-08 00:52:50.532665 | orchestrator | Wednesday 08 April 2026 00:48:27 +0000 (0:00:04.011) 0:02:00.136 ******* 2026-04-08 00:52:50.532671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 00:52:50.532683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:52:50.532690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532725 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.532735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 00:52:50.532744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:52:50.532750 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532786 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.532800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 00:52:50.532806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:52:50.532817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532830 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.532852 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.532858 | orchestrator | 2026-04-08 00:52:50.532864 | 
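The `haproxy` dictionaries logged above (for example `designate_api` with `mode: http`, `port: 9001`, and `external_fqdn: api.testbed.osism.xyz`) are consumed by kolla-ansible's haproxy-config role, which renders them into sections of the HAProxy configuration. A minimal sketch of what the internal `designate_api` entry could produce, assuming the default template; the backend node addresses are taken from the healthcheck URLs in the log, while the bind (VIP) address `192.168.16.9` is an assumption inferred from the `no_proxy` entries:

```
# Hypothetical HAProxy rendering of the 'designate_api' service entry.
# The bind address and exact section naming are assumptions, not copied
# from the kolla-ansible templates.
listen designate_api
    mode http
    bind 192.168.16.9:9001
    server testbed-node-0 192.168.16.10:9001 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9001 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9001 check inter 2000 rise 2 fall 5
```

The `designate_api_external` entry would yield a similar section bound to the external VIP for `api.testbed.osism.xyz`. In this run the items are reported as `skipping` on the testbed nodes, presumably because the config is only written on hosts in the loadbalancer group.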
orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-08 00:52:50.532873 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:01.081) 0:02:01.217 *******
2026-04-08 00:52:50.532880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-08 00:52:50.532892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-08 00:52:50.532900 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.532906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-08 00:52:50.532912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-08 00:52:50.532919 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.532925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-08 00:52:50.532931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-08 00:52:50.532937 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.532943 | orchestrator |
2026-04-08 00:52:50.532950 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-08 00:52:50.532956 | orchestrator | Wednesday 08 April 2026 00:48:30 +0000 (0:00:02.111) 0:02:03.328 *******
2026-04-08 00:52:50.532962 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.532968 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.532974 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.532980 | orchestrator |
2026-04-08 00:52:50.532986 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-08 00:52:50.532992 | orchestrator | Wednesday 08 April 2026 00:48:32 +0000 (0:00:01.488) 0:02:04.817 *******
2026-04-08 00:52:50.532998 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.533004 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.533010 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.533016 | orchestrator |
2026-04-08 00:52:50.533022 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-08 00:52:50.533028 | orchestrator | Wednesday 08 April 2026 00:48:34 +0000 (0:00:02.294) 0:02:07.111 *******
2026-04-08 00:52:50.533034 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.533040 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.533045 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.533050 | orchestrator |
2026-04-08 00:52:50.533056 | orchestrator | TASK [include_role : glance] ***************************************************
2026-04-08 00:52:50.533061 | orchestrator | Wednesday 08 April 2026 00:48:34 +0000 (0:00:00.325) 0:02:07.437 *******
2026-04-08 00:52:50.533067 | orchestrator | included: glance for testbed-node-1, testbed-node-2, testbed-node-0
2026-04-08 00:52:50.533073 | orchestrator |
2026-04-08 00:52:50.533079 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-08 00:52:50.533085 | orchestrator | Wednesday 08 April 2026 00:48:35 +0000 (0:00:00.992)
0:02:08.430 ******* 2026-04-08 00:52:50.533104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 00:52:50.533120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.533149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 00:52:50.535882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 00:52:50.536018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.536064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.536074 | orchestrator | 2026-04-08 00:52:50.536082 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-08 00:52:50.536090 | orchestrator | Wednesday 08 April 2026 00:48:41 +0000 (0:00:05.283) 0:02:13.713 ******* 2026-04-08 00:52:50.536097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 00:52:50.536113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.536125 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.536149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 00:52:50.536160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.536235 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.536249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 00:52:50.536257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.536287 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.536294 | orchestrator | 2026-04-08 00:52:50.536300 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-08 00:52:50.536307 | orchestrator | Wednesday 08 April 2026 00:48:45 +0000 (0:00:04.377) 0:02:18.091 ******* 2026-04-08 00:52:50.536318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:52:50.536346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:52:50.536354 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.536361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:52:50.536368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:52:50.536429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:52:50.536441 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.536448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:52:50.536455 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.536461 | orchestrator | 2026-04-08 00:52:50.536468 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-08 00:52:50.536474 | orchestrator | Wednesday 08 April 2026 00:48:50 +0000 (0:00:05.511) 0:02:23.603 ******* 2026-04-08 00:52:50.536481 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.536488 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.536501 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.536508 | orchestrator | 2026-04-08 00:52:50.536514 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-08 00:52:50.536521 | orchestrator | Wednesday 08 April 2026 00:48:52 +0000 (0:00:01.416) 0:02:25.020 ******* 2026-04-08 00:52:50.536527 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.536534 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.536540 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.536547 | orchestrator | 2026-04-08 00:52:50.536553 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-08 00:52:50.536559 | orchestrator | Wednesday 08 April 2026 00:48:54 +0000 (0:00:02.060) 0:02:27.080 ******* 2026-04-08 00:52:50.536566 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.536601 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.536607 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.536615 | orchestrator | 2026-04-08 00:52:50.536621 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-08 00:52:50.536627 | orchestrator | Wednesday 08 April 2026 00:48:54 
+0000 (0:00:00.329) 0:02:27.410 ******* 2026-04-08 00:52:50.536637 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.536644 | orchestrator | 2026-04-08 00:52:50.536650 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-08 00:52:50.536657 | orchestrator | Wednesday 08 April 2026 00:48:55 +0000 (0:00:01.067) 0:02:28.477 ******* 2026-04-08 00:52:50.536671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 00:52:50.536679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 00:52:50.536686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 00:52:50.536701 | orchestrator | 2026-04-08 00:52:50.536707 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-08 00:52:50.536714 | orchestrator | Wednesday 08 April 2026 00:48:59 +0000 (0:00:03.476) 0:02:31.954 ******* 2026-04-08 00:52:50.536720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 00:52:50.536727 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.536734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 00:52:50.536741 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.536752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 00:52:50.536759 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.536765 | orchestrator | 2026-04-08 00:52:50.536775 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-08 00:52:50.536782 | orchestrator | Wednesday 08 April 2026 00:48:59 +0000 (0:00:00.430) 0:02:32.384 ******* 2026-04-08 00:52:50.536790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-08 00:52:50.536799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-08 00:52:50.536806 | orchestrator | skipping: 
[testbed-node-0] 2026-04-08 00:52:50.536813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-08 00:52:50.536824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-08 00:52:50.536831 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.536838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-08 00:52:50.536844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-08 00:52:50.536851 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.536858 | orchestrator | 2026-04-08 00:52:50.536864 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-08 00:52:50.536871 | orchestrator | Wednesday 08 April 2026 00:49:00 +0000 (0:00:00.914) 0:02:33.299 ******* 2026-04-08 00:52:50.536878 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.536884 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.536891 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.536897 | orchestrator | 2026-04-08 00:52:50.536904 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-08 00:52:50.536911 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:01.459) 0:02:34.758 ******* 2026-04-08 00:52:50.536917 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.536924 | orchestrator | 
changed: [testbed-node-2] 2026-04-08 00:52:50.536931 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.536937 | orchestrator | 2026-04-08 00:52:50.536943 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-08 00:52:50.536950 | orchestrator | Wednesday 08 April 2026 00:49:04 +0000 (0:00:02.059) 0:02:36.818 ******* 2026-04-08 00:52:50.536956 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.536963 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.536969 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.536975 | orchestrator | 2026-04-08 00:52:50.536982 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-08 00:52:50.536990 | orchestrator | Wednesday 08 April 2026 00:49:04 +0000 (0:00:00.336) 0:02:37.154 ******* 2026-04-08 00:52:50.536996 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.537002 | orchestrator | 2026-04-08 00:52:50.537009 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-08 00:52:50.537015 | orchestrator | Wednesday 08 April 2026 00:49:05 +0000 (0:00:01.218) 0:02:38.373 ******* 2026-04-08 00:52:50.537033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:52:50.537047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:52:50.537065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:52:50.537078 | orchestrator | 2026-04-08 00:52:50.537085 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using 
single external frontend] *** 2026-04-08 00:52:50.537092 | orchestrator | Wednesday 08 April 2026 00:49:09 +0000 (0:00:03.790) 0:02:42.163 ******* 2026-04-08 00:52:50.537099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:52:50.537107 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.537322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:52:50.537337 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.537344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:52:50.537356 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.537363 | orchestrator | 2026-04-08 00:52:50.537370 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-08 00:52:50.537376 | orchestrator | Wednesday 08 April 2026 00:49:10 +0000 (0:00:00.647) 0:02:42.811 ******* 2026-04-08 00:52:50.537388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-08 00:52:50.537397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:52:50.537430 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-08 00:52:50.537438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:52:50.537448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-08 00:52:50.537455 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.537461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-08 00:52:50.537469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:52:50.537476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-08 00:52:50.537484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:52:50.537491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-08 00:52:50.537498 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.537505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-08 00:52:50.537517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:52:50.537529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-08 00:52:50.537540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:52:50.537545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-08 00:52:50.537550 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.537554 | orchestrator | 2026-04-08 00:52:50.537559 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-08 00:52:50.537566 | orchestrator | Wednesday 08 April 2026 00:49:11 +0000 (0:00:01.265) 0:02:44.076 ******* 2026-04-08 00:52:50.537572 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.537578 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.537587 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.537597 | orchestrator | 2026-04-08 00:52:50.537603 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-08 00:52:50.537609 | orchestrator | Wednesday 08 April 2026 00:49:13 +0000 (0:00:01.681) 0:02:45.757 ******* 2026-04-08 00:52:50.537615 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.537620 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.537627 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.537632 | orchestrator | 2026-04-08 00:52:50.537663 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-08 00:52:50.537670 | orchestrator | Wednesday 08 April 2026 00:49:15 +0000 (0:00:02.238) 0:02:47.996 ******* 2026-04-08 00:52:50.537676 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.537683 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.537689 | orchestrator | skipping: [testbed-node-2] 2026-04-08 
00:52:50.537697 | orchestrator | 2026-04-08 00:52:50.537723 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-08 00:52:50.537728 | orchestrator | Wednesday 08 April 2026 00:49:15 +0000 (0:00:00.325) 0:02:48.321 ******* 2026-04-08 00:52:50.537733 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.537737 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.537742 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.537746 | orchestrator | 2026-04-08 00:52:50.537751 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-08 00:52:50.537756 | orchestrator | Wednesday 08 April 2026 00:49:15 +0000 (0:00:00.293) 0:02:48.615 ******* 2026-04-08 00:52:50.537760 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.537765 | orchestrator | 2026-04-08 00:52:50.537770 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-08 00:52:50.537774 | orchestrator | Wednesday 08 April 2026 00:49:17 +0000 (0:00:01.178) 0:02:49.793 ******* 2026-04-08 00:52:50.537781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:52:50.537794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:52:50.537805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:52:50.537817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:52:50.537822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:52:50.537827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:52:50.537839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:52:50.537847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:52:50.537857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:52:50.537862 | orchestrator | 2026-04-08 00:52:50.537867 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-08 00:52:50.537872 | orchestrator | Wednesday 08 April 2026 00:49:20 +0000 (0:00:03.461) 0:02:53.255 ******* 2026-04-08 00:52:50.537877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:52:50.537882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:52:50.537891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:52:50.537895 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.537903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-08 00:52:50.537911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:52:50.537915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:52:50.537919 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.537923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:52:50.537931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:52:50.537935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:52:50.537940 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.537944 | orchestrator | 2026-04-08 00:52:50.537947 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-08 00:52:50.537951 | orchestrator | Wednesday 08 
April 2026 00:49:21 +0000 (0:00:00.636) 0:02:53.891 ******* 2026-04-08 00:52:50.537955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-08 00:52:50.537964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-08 00:52:50.537968 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.537975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-08 00:52:50.537979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-08 00:52:50.537983 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.537987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-08 00:52:50.537991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-04-08 00:52:50.537995 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.537999 | orchestrator | 2026-04-08 00:52:50.538010 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-08 00:52:50.538054 | orchestrator | Wednesday 08 April 2026 00:49:22 +0000 (0:00:01.010) 0:02:54.901 ******* 2026-04-08 00:52:50.538059 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.538063 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.538067 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.538071 | orchestrator | 2026-04-08 00:52:50.538074 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-08 00:52:50.538078 | orchestrator | Wednesday 08 April 2026 00:49:23 +0000 (0:00:01.327) 0:02:56.229 ******* 2026-04-08 00:52:50.538082 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.538086 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.538090 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.538094 | orchestrator | 2026-04-08 00:52:50.538097 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-08 00:52:50.538101 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:02.100) 0:02:58.329 ******* 2026-04-08 00:52:50.538105 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.538109 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.538113 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.538117 | orchestrator | 2026-04-08 00:52:50.538120 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-08 00:52:50.538124 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.304) 0:02:58.633 ******* 2026-04-08 00:52:50.538158 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 
00:52:50.538166 | orchestrator | 2026-04-08 00:52:50.538173 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-08 00:52:50.538181 | orchestrator | Wednesday 08 April 2026 00:49:27 +0000 (0:00:01.132) 0:02:59.766 ******* 2026-04-08 00:52:50.538189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 00:52:50.538204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 
00:52:50.538234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 00:52:50.538245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 00:52:50.538249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538257 | orchestrator | 2026-04-08 00:52:50.538261 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-08 00:52:50.538265 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:03.132) 0:03:02.899 ******* 2026-04-08 00:52:50.538273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 00:52:50.538286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538290 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.538295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 00:52:50.538299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538303 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.538307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 00:52:50.538314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538322 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.538326 | orchestrator | 2026-04-08 00:52:50.538333 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-08 00:52:50.538337 | orchestrator | Wednesday 08 April 2026 00:49:31 +0000 (0:00:00.775) 0:03:03.674 ******* 2026-04-08 00:52:50.538342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-08 00:52:50.538347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-08 00:52:50.538352 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.538356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-08 00:52:50.538360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-08 00:52:50.538364 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.538368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-08 00:52:50.538372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-08 00:52:50.538376 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.538380 | orchestrator | 2026-04-08 00:52:50.538384 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-08 00:52:50.538387 | orchestrator | Wednesday 08 April 2026 00:49:32 +0000 (0:00:01.509) 0:03:05.184 ******* 2026-04-08 00:52:50.538391 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.538395 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.538399 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.538403 | orchestrator | 2026-04-08 00:52:50.538407 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-08 00:52:50.538411 | orchestrator | Wednesday 08 April 2026 00:49:33 +0000 (0:00:01.428) 0:03:06.613 ******* 2026-04-08 00:52:50.538415 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.538419 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.538423 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.538427 | orchestrator | 2026-04-08 00:52:50.538430 | 
orchestrator | TASK [include_role : manila] *************************************************** 2026-04-08 00:52:50.538434 | orchestrator | Wednesday 08 April 2026 00:49:36 +0000 (0:00:02.293) 0:03:08.906 ******* 2026-04-08 00:52:50.538438 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.538442 | orchestrator | 2026-04-08 00:52:50.538446 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-08 00:52:50.538449 | orchestrator | Wednesday 08 April 2026 00:49:37 +0000 (0:00:01.123) 0:03:10.030 ******* 2026-04-08 00:52:50.538454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-08 00:52:50.538471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-08 00:52:50.538487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-08 00:52:50.538524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538549 | orchestrator | 2026-04-08 00:52:50.538555 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-08 00:52:50.538561 | orchestrator | Wednesday 08 April 2026 00:49:41 +0000 (0:00:03.935) 0:03:13.965 ******* 2026-04-08 00:52:50.538568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-08 00:52:50.538586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.538605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.538613 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.538620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-08 00:52:50.538628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.538641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.538656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.538666 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.538679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-08 00:52:50.538686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.538694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.538701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.538714 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.538720 | orchestrator |
2026-04-08 00:52:50.538727 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-04-08 00:52:50.538733 | orchestrator | Wednesday 08 April 2026 00:49:41 +0000 (0:00:00.684) 0:03:14.650 *******
2026-04-08 00:52:50.538740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-08 00:52:50.538747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-08 00:52:50.538754 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.538761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-08 00:52:50.538768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-08 00:52:50.538774 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.538780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-08 00:52:50.538791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-08 00:52:50.538800 |
orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.538809 | orchestrator |
2026-04-08 00:52:50.538816 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-04-08 00:52:50.538822 | orchestrator | Wednesday 08 April 2026 00:49:42 +0000 (0:00:00.888) 0:03:15.538 *******
2026-04-08 00:52:50.538828 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.538841 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.538848 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.538855 | orchestrator |
2026-04-08 00:52:50.538861 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-04-08 00:52:50.538868 | orchestrator | Wednesday 08 April 2026 00:49:44 +0000 (0:00:01.336) 0:03:16.874 *******
2026-04-08 00:52:50.538875 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.538881 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.538888 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.538894 | orchestrator |
2026-04-08 00:52:50.538901 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-04-08 00:52:50.538907 | orchestrator | Wednesday 08 April 2026 00:49:46 +0000 (0:00:02.271) 0:03:19.146 *******
2026-04-08 00:52:50.538914 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:50.538920 | orchestrator |
2026-04-08 00:52:50.538926 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-04-08 00:52:50.538933 | orchestrator | Wednesday 08 April 2026 00:49:47 +0000 (0:00:01.342) 0:03:20.488 *******
2026-04-08 00:52:50.538938 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:52:50.538942 | orchestrator |
2026-04-08 00:52:50.538946 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-04-08 00:52:50.538950 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:03.225) 0:03:23.714 *******
2026-04-08 00:52:50.538956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:52:50.538968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-08 00:52:50.538975 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.538994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:52:50.539011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-08 00:52:50.539018 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10',
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:52:50.539039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-08 00:52:50.539044 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539048 | orchestrator |
2026-04-08 00:52:50.539052 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-04-08 00:52:50.539056 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:02.345) 0:03:26.060 *******
2026-04-08 00:52:50.539060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:52:50.539069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-08 00:52:50.539073 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:52:50.539088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-08 00:52:50.539096 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12',
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:52:50.539104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-08 00:52:50.539108 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539112 | orchestrator |
2026-04-08 00:52:50.539121 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-04-08 00:52:50.539125 | orchestrator | Wednesday 08 April 2026 00:49:55 +0000 (0:00:02.512) 0:03:28.573 *******
2026-04-08 00:52:50.539168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-08 00:52:50.539173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-08 00:52:50.539181 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-08 00:52:50.539189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-08 00:52:50.539193 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-08 00:52:50.539201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-08 00:52:50.539205 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539209 | orchestrator |
2026-04-08 00:52:50.539213 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-08 00:52:50.539217 | orchestrator | Wednesday 08 April 2026 00:49:58 +0000 (0:00:02.318) 0:03:30.892 *******
2026-04-08 00:52:50.539221 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.539225 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.539229 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.539233 | orchestrator |
2026-04-08 00:52:50.539240 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-08 00:52:50.539244 | orchestrator | Wednesday 08 April 2026 00:50:00 +0000 (0:00:02.148) 0:03:33.041 *******
2026-04-08 00:52:50.539248 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539252 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539260 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539264 | orchestrator |
2026-04-08 00:52:50.539268 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-08 00:52:50.539271 | orchestrator | Wednesday 08 April 2026 00:50:01 +0000 (0:00:00.315) 0:03:34.655 *******
2026-04-08 00:52:50.539288 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539293 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539297 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539301 | orchestrator |
2026-04-08 00:52:50.539304 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-08 00:52:50.539308 | orchestrator | Wednesday 08 April 2026 00:50:02 +0000 (0:00:00.315) 0:03:34.971 *******
2026-04-08 00:52:50.539312 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:50.539316 | orchestrator |
2026-04-08 00:52:50.539320 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-04-08 00:52:50.539323 | orchestrator | Wednesday 08 April 2026 00:50:03 +0000 (0:00:01.271) 0:03:36.242 *******
2026-04-08 00:52:50.539328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-08 00:52:50.539333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-08 00:52:50.539337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image':
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-08 00:52:50.539341 | orchestrator |
2026-04-08 00:52:50.539345 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-04-08 00:52:50.539349 | orchestrator | Wednesday 08 April 2026 00:50:05 +0000 (0:00:01.514) 0:03:37.757 *******
2026-04-08 00:52:50.539356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-08 00:52:50.539364 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-08 00:52:50.539377 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-08 00:52:50.539385 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539389 | orchestrator |
2026-04-08 00:52:50.539393 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-04-08 00:52:50.539397 | orchestrator | Wednesday 08 April 2026 00:50:05 +0000 (0:00:00.343) 0:03:38.101 *******
2026-04-08 00:52:50.539401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-08 00:52:50.539405 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-08 00:52:50.539413 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-08 00:52:50.539420 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539425 | orchestrator |
2026-04-08 00:52:50.539429 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-08 00:52:50.539433 | orchestrator | Wednesday 08 April 2026 00:50:06 +0000 (0:00:00.810) 0:03:38.911 *******
2026-04-08 00:52:50.539436 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539440 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539444 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539448 | orchestrator |
2026-04-08 00:52:50.539454 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-08 00:52:50.539467 | orchestrator | Wednesday 08 April 2026 00:50:06 +0000 (0:00:00.369) 0:03:39.280 *******
2026-04-08 00:52:50.539474 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.539482 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.539488 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.539495 | orchestrator |
2026-04-08 00:52:50.539501 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-08 00:52:50.539507 | orchestrator | Wednesday 08 April 2026 00:50:07 +0000 (0:00:01.284) 0:03:40.564 ******* 2026-04-08 00:52:50.539512 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.539518 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.539524 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.539530 | orchestrator | 2026-04-08 00:52:50.539536 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-08 00:52:50.539543 | orchestrator | Wednesday 08 April 2026 00:50:08 +0000 (0:00:00.303) 0:03:40.868 ******* 2026-04-08 00:52:50.539549 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.539555 | orchestrator | 2026-04-08 00:52:50.539561 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-08 00:52:50.539567 | orchestrator | Wednesday 08 April 2026 00:50:09 +0000 (0:00:01.435) 0:03:42.303 ******* 2026-04-08 00:52:50.539584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 00:52:50.539591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-08 00:52:50.539630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539642 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 00:52:50.539649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.539712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-08 00:52:50.539723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.539771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.539788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.539792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 00:52:50.539800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.539886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-08 00:52:50.539897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.539904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.539951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2026-04-08 00:52:50.539959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.539979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.539984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.539990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.539996 | orchestrator | 2026-04-08 00:52:50.540002 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-08 00:52:50.540007 | orchestrator | Wednesday 08 April 2026 00:50:14 +0000 (0:00:04.397) 0:03:46.701 ******* 2026-04-08 00:52:50.540023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 00:52:50.540030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 00:52:50.540056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-08 00:52:50.540081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-08 00:52:50.540109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.540150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 00:52:50.540172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.540201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540209 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:50.540371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-08 00:52:50.540380 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.540384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540388 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.540396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2026-04-08 00:52:50.540410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.540415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.540423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540427 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.540431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.540448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-08 00:52:50.540473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.540480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:52:50.540487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:52:50.540500 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.540506 | orchestrator | 2026-04-08 00:52:50.540514 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-08 00:52:50.540525 | orchestrator | Wednesday 08 April 2026 00:50:16 +0000 (0:00:02.072) 0:03:48.773 ******* 2026-04-08 00:52:50.540533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-08 00:52:50.540541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-08 00:52:50.540548 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.540559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-08 00:52:50.540565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-08 00:52:50.540571 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.540578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-08 00:52:50.540584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-08 00:52:50.540590 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.540596 | orchestrator | 2026-04-08 00:52:50.540602 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-08 00:52:50.540609 | orchestrator | Wednesday 08 April 2026 00:50:17 +0000 (0:00:01.492) 0:03:50.265 ******* 2026-04-08 00:52:50.540615 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.540621 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.540628 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.540635 | orchestrator | 2026-04-08 00:52:50.540641 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-08 00:52:50.540648 | orchestrator | Wednesday 08 April 2026 00:50:18 +0000 (0:00:01.308) 0:03:51.574 ******* 2026-04-08 00:52:50.540655 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.540660 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.540664 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.540668 | orchestrator | 2026-04-08 00:52:50.540673 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-08 00:52:50.540679 | orchestrator | Wednesday 08 April 2026 00:50:21 +0000 (0:00:02.288) 0:03:53.862 ******* 2026-04-08 00:52:50.540688 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.540695 | orchestrator | 2026-04-08 00:52:50.540700 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-08 00:52:50.540706 | 
orchestrator | Wednesday 08 April 2026 00:50:22 +0000 (0:00:01.478) 0:03:55.341 ******* 2026-04-08 00:52:50.540712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.540731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.540744 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.540751 | orchestrator | 2026-04-08 00:52:50.540757 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-08 00:52:50.540763 | orchestrator | Wednesday 08 April 2026 00:50:25 +0000 (0:00:03.280) 0:03:58.622 ******* 2026-04-08 00:52:50.540769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.540774 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.540781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.540795 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.540802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.540807 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.540810 | orchestrator | 2026-04-08 00:52:50.540815 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-08 00:52:50.540818 | orchestrator | Wednesday 08 April 2026 00:50:26 +0000 (0:00:00.515) 0:03:59.138 ******* 2026-04-08 00:52:50.540827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-08 00:52:50.540832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-08 00:52:50.540836 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.540843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-08 00:52:50.540880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-08 00:52:50.540885 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.540889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-08 00:52:50.540893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-08 00:52:50.540897 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.540901 | orchestrator | 2026-04-08 00:52:50.540905 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-08 00:52:50.540908 | orchestrator | Wednesday 08 April 2026 00:50:27 +0000 (0:00:01.380) 0:04:00.518 ******* 2026-04-08 00:52:50.540912 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.540916 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.540920 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.540924 | orchestrator | 2026-04-08 00:52:50.540928 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-08 00:52:50.540932 | orchestrator | Wednesday 08 April 2026 00:50:29 +0000 (0:00:01.463) 0:04:01.982 ******* 2026-04-08 00:52:50.540943 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.540946 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.540950 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.540954 | orchestrator | 2026-04-08 00:52:50.540958 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-08 00:52:50.540962 | orchestrator | Wednesday 08 April 2026 00:50:31 +0000 (0:00:02.159) 0:04:04.141 ******* 2026-04-08 00:52:50.540965 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.540970 | orchestrator | 2026-04-08 00:52:50.540974 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-08 00:52:50.540978 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:01.553) 0:04:05.694 ******* 2026-04-08 00:52:50.540984 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.540993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.541016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.541037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541049 | orchestrator | 2026-04-08 00:52:50.541054 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-08 00:52:50.541058 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:03.924) 0:04:09.619 ******* 2026-04-08 00:52:50.541064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.541068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.541084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541093 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541106 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.541119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.541154 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541162 | orchestrator | 2026-04-08 00:52:50.541167 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-08 00:52:50.541171 | orchestrator | Wednesday 08 April 2026 00:50:37 +0000 (0:00:00.687) 0:04:10.306 ******* 2026-04-08 00:52:50.541176 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541200 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541218 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-08 00:52:50.541241 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541245 | orchestrator | 2026-04-08 00:52:50.541250 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-08 00:52:50.541254 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.819) 0:04:11.126 ******* 2026-04-08 00:52:50.541259 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.541263 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.541268 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.541272 | orchestrator | 2026-04-08 00:52:50.541276 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-08 00:52:50.541284 | orchestrator | Wednesday 08 April 2026 00:50:40 +0000 (0:00:01.741) 0:04:12.867 ******* 2026-04-08 
00:52:50.541292 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.541297 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.541301 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.541306 | orchestrator | 2026-04-08 00:52:50.541310 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-08 00:52:50.541315 | orchestrator | Wednesday 08 April 2026 00:50:42 +0000 (0:00:02.081) 0:04:14.949 ******* 2026-04-08 00:52:50.541320 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.541324 | orchestrator | 2026-04-08 00:52:50.541329 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-08 00:52:50.541336 | orchestrator | Wednesday 08 April 2026 00:50:43 +0000 (0:00:01.268) 0:04:16.217 ******* 2026-04-08 00:52:50.541341 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-08 00:52:50.541346 | orchestrator | 2026-04-08 00:52:50.541350 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-08 00:52:50.541354 | orchestrator | Wednesday 08 April 2026 00:50:44 +0000 (0:00:01.404) 0:04:17.622 ******* 2026-04-08 00:52:50.541358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-08 00:52:50.541363 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-08 00:52:50.541367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-08 00:52:50.541371 | orchestrator | 2026-04-08 00:52:50.541375 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-08 00:52:50.541379 | orchestrator | Wednesday 08 April 2026 00:50:48 +0000 (0:00:03.907) 0:04:21.529 ******* 2026-04-08 00:52:50.541383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541387 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541391 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541400 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541411 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541415 | orchestrator | 2026-04-08 00:52:50.541418 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-08 00:52:50.541422 | orchestrator | Wednesday 08 April 2026 00:50:50 +0000 (0:00:01.322) 0:04:22.852 ******* 2026-04-08 00:52:50.541429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:52:50.541433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-04-08 00:52:50.541437 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:52:50.541445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:52:50.541449 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:52:50.541457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:52:50.541461 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541464 | orchestrator | 2026-04-08 00:52:50.541468 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-08 00:52:50.541472 | orchestrator | Wednesday 08 April 2026 00:50:52 +0000 (0:00:01.909) 0:04:24.761 ******* 2026-04-08 00:52:50.541476 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.541480 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.541484 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.541490 | orchestrator | 2026-04-08 00:52:50.541497 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2026-04-08 00:52:50.541503 | orchestrator | Wednesday 08 April 2026 00:50:54 +0000 (0:00:02.362) 0:04:27.124 ******* 2026-04-08 00:52:50.541509 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.541515 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.541521 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.541526 | orchestrator | 2026-04-08 00:52:50.541532 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-08 00:52:50.541538 | orchestrator | Wednesday 08 April 2026 00:50:57 +0000 (0:00:03.074) 0:04:30.198 ******* 2026-04-08 00:52:50.541548 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-08 00:52:50.541554 | orchestrator | 2026-04-08 00:52:50.541560 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-08 00:52:50.541566 | orchestrator | Wednesday 08 April 2026 00:50:58 +0000 (0:00:00.817) 0:04:31.016 ******* 2026-04-08 00:52:50.541571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541577 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541593 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541608 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541613 | orchestrator | 2026-04-08 00:52:50.541620 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-08 00:52:50.541626 | orchestrator | Wednesday 08 April 2026 00:50:59 +0000 (0:00:01.354) 0:04:32.371 ******* 2026-04-08 00:52:50.541631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541637 | orchestrator | skipping: [testbed-node-0] 2026-04-08 
00:52:50.541643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541648 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:52:50.541686 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541694 | orchestrator | 2026-04-08 00:52:50.541700 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-08 00:52:50.541706 | orchestrator | Wednesday 08 April 2026 00:51:01 +0000 (0:00:01.353) 0:04:33.724 ******* 2026-04-08 00:52:50.541713 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541717 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541721 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541725 | orchestrator | 2026-04-08 00:52:50.541729 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-08 00:52:50.541732 | orchestrator | Wednesday 08 April 2026 
00:51:02 +0000 (0:00:01.101) 0:04:34.825 ******* 2026-04-08 00:52:50.541736 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.541741 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.541744 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.541750 | orchestrator | 2026-04-08 00:52:50.541756 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-08 00:52:50.541761 | orchestrator | Wednesday 08 April 2026 00:51:04 +0000 (0:00:02.209) 0:04:37.035 ******* 2026-04-08 00:52:50.541767 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.541773 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.541779 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.541786 | orchestrator | 2026-04-08 00:52:50.541792 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-08 00:52:50.541799 | orchestrator | Wednesday 08 April 2026 00:51:07 +0000 (0:00:02.709) 0:04:39.744 ******* 2026-04-08 00:52:50.541815 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-08 00:52:50.541830 | orchestrator | 2026-04-08 00:52:50.541836 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-08 00:52:50.541842 | orchestrator | Wednesday 08 April 2026 00:51:07 +0000 (0:00:00.807) 0:04:40.552 ******* 2026-04-08 00:52:50.541855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:52:50.541861 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:52:50.541880 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:52:50.541897 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541901 | orchestrator | 2026-04-08 00:52:50.541905 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-08 00:52:50.541909 | orchestrator | Wednesday 08 April 2026 00:51:09 +0000 (0:00:01.168) 0:04:41.721 ******* 2026-04-08 00:52:50.541913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:52:50.541917 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.541921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:52:50.541925 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:52:50.541933 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541937 | orchestrator | 2026-04-08 00:52:50.541940 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-08 00:52:50.541944 | orchestrator | Wednesday 08 April 2026 00:51:10 +0000 (0:00:01.110) 0:04:42.831 ******* 2026-04-08 00:52:50.541949 | orchestrator | skipping: [testbed-node-0] 
2026-04-08 00:52:50.541952 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.541956 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.541962 | orchestrator | 2026-04-08 00:52:50.541968 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-08 00:52:50.541974 | orchestrator | Wednesday 08 April 2026 00:51:11 +0000 (0:00:01.319) 0:04:44.151 ******* 2026-04-08 00:52:50.541981 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.541986 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.541992 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.541997 | orchestrator | 2026-04-08 00:52:50.542003 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-08 00:52:50.542009 | orchestrator | Wednesday 08 April 2026 00:51:13 +0000 (0:00:02.452) 0:04:46.603 ******* 2026-04-08 00:52:50.542060 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.542067 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.542078 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.542086 | orchestrator | 2026-04-08 00:52:50.542090 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-08 00:52:50.542094 | orchestrator | Wednesday 08 April 2026 00:51:16 +0000 (0:00:02.871) 0:04:49.474 ******* 2026-04-08 00:52:50.542098 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.542107 | orchestrator | 2026-04-08 00:52:50.542111 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-08 00:52:50.542114 | orchestrator | Wednesday 08 April 2026 00:51:18 +0000 (0:00:01.202) 0:04:50.677 ******* 2026-04-08 00:52:50.542124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.542220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 00:52:50.542243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 00:52:50.542250 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 00:52:50.542257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:52:50.542270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-08 00:52:50.542292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-08 00:52:50.542299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-08 00:52:50.542305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-08 00:52:50.542319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.542361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.542365 | orchestrator |
2026-04-08 00:52:50.542369 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-04-08 00:52:50.542373 | orchestrator | Wednesday 08 April 2026 00:51:21 +0000 (0:00:03.198) 0:04:53.876 *******
2026-04-08 00:52:50.542378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-08 00:52:50.542382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-08 00:52:50.542389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.542418 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.542423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-08 00:52:50.542427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-08 00:52:50.542431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.542453 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.542457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-08 00:52:50.542462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-08 00:52:50.542466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-08 00:52:50.542473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-08 00:52:50.542481 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.542485 | orchestrator |
2026-04-08 00:52:50.542489 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-08 00:52:50.542493 | orchestrator | Wednesday 08 April 2026 00:51:22 +0000 (0:00:01.010) 0:04:54.886 *******
2026-04-08 00:52:50.542499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:52:50.542505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:52:50.542510 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.542516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:52:50.542520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:52:50.542524 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.542527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:52:50.542531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:52:50.542535 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.542539 | orchestrator |
2026-04-08 00:52:50.542542 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-08 00:52:50.542546 | orchestrator | Wednesday 08 April 2026 00:51:23 +0000 (0:00:00.826) 0:04:55.713 *******
2026-04-08 00:52:50.542550 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.542554 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.542558 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.542562 | orchestrator |
2026-04-08 00:52:50.542565 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-08 00:52:50.542569 | orchestrator | Wednesday 08 April 2026 00:51:24 +0000 (0:00:01.410) 0:04:57.123 *******
2026-04-08 00:52:50.542573 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:50.542577 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:50.542580 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:50.542584 | orchestrator |
2026-04-08 00:52:50.542588 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-08 00:52:50.542592 | orchestrator | Wednesday 08 April 2026 00:51:26 +0000 (0:00:02.083) 0:04:59.206 *******
2026-04-08 00:52:50.542595 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:50.542599 | orchestrator |
2026-04-08 00:52:50.542603 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-08 00:52:50.542607 | orchestrator | Wednesday 08 April 2026 00:51:28 +0000 (0:00:01.471) 0:05:00.677 *******
2026-04-08 00:52:50.542617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-08 00:52:50.542623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-08 00:52:50.542684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-08 00:52:50.542696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-08 00:52:50.542701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-08 00:52:50.542710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-08 00:52:50.542714 | orchestrator |
2026-04-08 00:52:50.542718 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-08 00:52:50.542722 | orchestrator | Wednesday 08 April 2026 00:51:33 +0000 (0:00:05.329) 0:05:06.006 *******
2026-04-08 00:52:50.542732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-08 00:52:50.542736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-08 00:52:50.542741 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.542745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-08 00:52:50.542753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-08 00:52:50.542757 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.542767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-08 00:52:50.542771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-08 00:52:50.542776 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.542780 | orchestrator |
2026-04-08 00:52:50.542783 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-08 00:52:50.542791 | orchestrator | Wednesday 08 April 2026 00:51:34 +0000 (0:00:01.014) 0:05:07.021 *******
2026-04-08 00:52:50.542795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-08 00:52:50.542799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-08 00:52:50.542804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-08 00:52:50.542808 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.542812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-08 00:52:50.542816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-08 00:52:50.542820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-08 00:52:50.542882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-08 00:52:50.542891 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.542918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-08 00:52:50.542923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-08 00:52:50.542927 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.542931 | orchestrator |
2026-04-08 00:52:50.542938 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-08 00:52:50.542942 | orchestrator | Wednesday 08 April 2026 00:51:35 +0000 (0:00:00.496) 0:05:08.383 *******
2026-04-08 00:52:50.542946 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.542950 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.542955 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.542958 | orchestrator |
2026-04-08 00:52:50.542962 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-08 00:52:50.542966 | orchestrator | Wednesday 08 April 2026 00:51:36 +0000 (0:00:00.496) 0:05:08.879 *******
2026-04-08 00:52:50.542970 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:50.542974 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:50.542978 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:50.542981 | orchestrator |
2026-04-08 00:52:50.542989 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-08 00:52:50.542993 | orchestrator | Wednesday 08 April 2026 00:51:37 +0000 (0:00:01.388) 0:05:10.268 *******
2026-04-08 00:52:50.542996 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:50.543000 | orchestrator |
2026-04-08 00:52:50.543004 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-08 00:52:50.543008 | orchestrator | Wednesday 08 April 2026 00:51:39 +0000 (0:00:01.661) 0:05:11.930 *******
2026-04-08 00:52:50.543017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-08 00:52:50.543022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:52:50.543026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:52:50.543030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:52:50.543035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:52:50.543042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-08 00:52:50.543054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:52:50.543064 | orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-08 00:52:50.543091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:52:50.543109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543148 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-08 00:52:50.543163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-08 00:52:50.543170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-08 00:52:50.543186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-08 00:52:50.543201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-08 00:52:50.543263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-08 00:52:50.543269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543290 | orchestrator | 2026-04-08 00:52:50.543296 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-08 00:52:50.543302 | orchestrator | Wednesday 08 April 2026 00:51:43 +0000 (0:00:04.341) 0:05:16.272 ******* 2026-04-08 00:52:50.543306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-08 00:52:50.543315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:52:50.543323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543332 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-08 00:52:50.543340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-08 00:52:50.543350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543366 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-08 00:52:50.543374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:52:50.543378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}})  2026-04-08 00:52:50.543404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-08 00:52:50.543409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:52:50.543413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-08 00:52:50.543417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543453 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-08 00:52:50.543461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-08 00:52:50.543471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:52:50.543482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:52:50.543486 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543490 | orchestrator | 2026-04-08 00:52:50.543494 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-08 00:52:50.543497 | orchestrator | Wednesday 08 April 2026 00:51:44 +0000 (0:00:00.892) 0:05:17.164 ******* 2026-04-08 00:52:50.543501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-08 00:52:50.543506 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-08 00:52:50.543510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-08 00:52:50.543515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-08 00:52:50.543519 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-08 00:52:50.543530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-08 00:52:50.543534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-08 00:52:50.543538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-08 00:52:50.543542 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-08 00:52:50.543552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-08 00:52:50.543556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-08 00:52:50.543609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-08 00:52:50.543614 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543618 | orchestrator | 2026-04-08 00:52:50.543622 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-08 00:52:50.543626 | orchestrator | Wednesday 08 April 2026 00:51:45 +0000 (0:00:01.382) 0:05:18.546 ******* 2026-04-08 00:52:50.543630 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543634 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543637 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:52:50.543641 | orchestrator | 2026-04-08 00:52:50.543645 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-08 00:52:50.543649 | orchestrator | Wednesday 08 April 2026 00:51:46 +0000 (0:00:00.481) 0:05:19.028 ******* 2026-04-08 00:52:50.543653 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543657 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543661 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543664 | orchestrator | 2026-04-08 00:52:50.543668 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-08 00:52:50.543672 | orchestrator | Wednesday 08 April 2026 00:51:47 +0000 (0:00:01.125) 0:05:20.154 ******* 2026-04-08 00:52:50.543676 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.543679 | orchestrator | 2026-04-08 00:52:50.543683 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-08 00:52:50.543687 | orchestrator | Wednesday 08 April 2026 00:51:48 +0000 (0:00:01.373) 0:05:21.527 ******* 2026-04-08 00:52:50.543691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:52:50.543699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:52:50.543708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:52:50.543712 | orchestrator | 2026-04-08 00:52:50.543716 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-08 00:52:50.543722 | orchestrator | Wednesday 08 April 2026 00:51:51 +0000 (0:00:02.453) 0:05:23.980 ******* 2026-04-08 00:52:50.543726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:52:50.543730 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:52:50.543741 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:52:50.543749 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543753 | orchestrator | 2026-04-08 00:52:50.543757 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-08 00:52:50.543760 | orchestrator | Wednesday 08 April 2026 
00:51:51 +0000 (0:00:00.374) 0:05:24.355 ******* 2026-04-08 00:52:50.543765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-08 00:52:50.543769 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-08 00:52:50.543779 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-08 00:52:50.543787 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543791 | orchestrator | 2026-04-08 00:52:50.543794 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-08 00:52:50.543798 | orchestrator | Wednesday 08 April 2026 00:51:52 +0000 (0:00:00.589) 0:05:24.944 ******* 2026-04-08 00:52:50.543802 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543808 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543812 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543815 | orchestrator | 2026-04-08 00:52:50.543819 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-08 00:52:50.543823 | orchestrator | Wednesday 08 April 2026 00:51:53 +0000 (0:00:00.761) 0:05:25.706 ******* 2026-04-08 00:52:50.543827 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543831 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543834 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543838 | orchestrator | 2026-04-08 00:52:50.543842 | orchestrator | TASK [include_role : skyline] 
************************************************** 2026-04-08 00:52:50.543849 | orchestrator | Wednesday 08 April 2026 00:51:54 +0000 (0:00:01.341) 0:05:27.047 ******* 2026-04-08 00:52:50.543853 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:50.543857 | orchestrator | 2026-04-08 00:52:50.543861 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-08 00:52:50.543864 | orchestrator | Wednesday 08 April 2026 00:51:55 +0000 (0:00:01.480) 0:05:28.527 ******* 2026-04-08 00:52:50.543869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.543874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.543879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.543888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.543897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.543902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-08 00:52:50.543905 | orchestrator | 2026-04-08 00:52:50.543909 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-08 00:52:50.543913 | orchestrator | Wednesday 08 April 2026 00:52:01 +0000 (0:00:06.019) 0:05:34.547 ******* 2026-04-08 00:52:50.543917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.543925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.543933 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.543937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.543941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.543945 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.543949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.543956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-08 00:52:50.543963 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.543967 | orchestrator | 2026-04-08 00:52:50.543971 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-08 00:52:50.543976 | orchestrator | Wednesday 08 April 2026 00:52:02 +0000 (0:00:00.833) 0:05:35.380 ******* 2026-04-08 00:52:50.543980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-08 00:52:50.543984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-08 00:52:50.543988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-08 00:52:50.543992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-08 00:52:50.543996 | orchestrator | skipping: 
[testbed-node-0] 2026-04-08 00:52:50.544000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544016 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-08 00:52:50.544035 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544039 | orchestrator | 2026-04-08 00:52:50.544043 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-08 00:52:50.544046 | orchestrator | Wednesday 08 April 2026 00:52:03 +0000 (0:00:00.894) 0:05:36.275 ******* 2026-04-08 00:52:50.544050 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.544054 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.544058 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.544065 | orchestrator | 2026-04-08 00:52:50.544068 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-08 00:52:50.544072 | orchestrator | Wednesday 08 April 2026 00:52:04 +0000 (0:00:01.282) 0:05:37.557 ******* 2026-04-08 00:52:50.544077 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.544083 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.544090 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.544095 | orchestrator | 2026-04-08 00:52:50.544106 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-08 00:52:50.544113 | orchestrator | Wednesday 08 April 2026 00:52:06 +0000 (0:00:02.012) 0:05:39.570 ******* 2026-04-08 00:52:50.544123 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544198 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544206 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544213 | orchestrator | 2026-04-08 00:52:50.544219 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-08 00:52:50.544225 | orchestrator | Wednesday 08 April 2026 00:52:07 +0000 (0:00:00.491) 
0:05:40.062 ******* 2026-04-08 00:52:50.544231 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544237 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544244 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544250 | orchestrator | 2026-04-08 00:52:50.544256 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-08 00:52:50.544266 | orchestrator | Wednesday 08 April 2026 00:52:07 +0000 (0:00:00.308) 0:05:40.371 ******* 2026-04-08 00:52:50.544273 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544279 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544286 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544293 | orchestrator | 2026-04-08 00:52:50.544297 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-08 00:52:50.544300 | orchestrator | Wednesday 08 April 2026 00:52:07 +0000 (0:00:00.289) 0:05:40.660 ******* 2026-04-08 00:52:50.544304 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544308 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544312 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544315 | orchestrator | 2026-04-08 00:52:50.544319 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-08 00:52:50.544323 | orchestrator | Wednesday 08 April 2026 00:52:08 +0000 (0:00:00.249) 0:05:40.910 ******* 2026-04-08 00:52:50.544327 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544330 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544334 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544338 | orchestrator | 2026-04-08 00:52:50.544342 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-08 00:52:50.544345 | orchestrator | Wednesday 08 April 2026 00:52:08 +0000 (0:00:00.463) 
0:05:41.373 ******* 2026-04-08 00:52:50.544349 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544353 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544356 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544360 | orchestrator | 2026-04-08 00:52:50.544364 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-08 00:52:50.544368 | orchestrator | Wednesday 08 April 2026 00:52:09 +0000 (0:00:00.553) 0:05:41.927 ******* 2026-04-08 00:52:50.544371 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544375 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544379 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544383 | orchestrator | 2026-04-08 00:52:50.544387 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-08 00:52:50.544391 | orchestrator | Wednesday 08 April 2026 00:52:09 +0000 (0:00:00.665) 0:05:42.592 ******* 2026-04-08 00:52:50.544394 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544398 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544402 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544406 | orchestrator | 2026-04-08 00:52:50.544409 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-08 00:52:50.544418 | orchestrator | Wednesday 08 April 2026 00:52:10 +0000 (0:00:00.688) 0:05:43.281 ******* 2026-04-08 00:52:50.544422 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544425 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544429 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544434 | orchestrator | 2026-04-08 00:52:50.544440 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-08 00:52:50.544446 | orchestrator | Wednesday 08 April 2026 00:52:11 +0000 (0:00:00.927) 0:05:44.209 ******* 2026-04-08 00:52:50.544452 | 
orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544458 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544467 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544475 | orchestrator | 2026-04-08 00:52:50.544480 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-08 00:52:50.544486 | orchestrator | Wednesday 08 April 2026 00:52:12 +0000 (0:00:01.020) 0:05:45.229 ******* 2026-04-08 00:52:50.544492 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544498 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544503 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544510 | orchestrator | 2026-04-08 00:52:50.544516 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-08 00:52:50.544522 | orchestrator | Wednesday 08 April 2026 00:52:13 +0000 (0:00:00.903) 0:05:46.133 ******* 2026-04-08 00:52:50.544527 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.544534 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.544539 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.544543 | orchestrator | 2026-04-08 00:52:50.544547 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-08 00:52:50.544551 | orchestrator | Wednesday 08 April 2026 00:52:18 +0000 (0:00:04.848) 0:05:50.981 ******* 2026-04-08 00:52:50.544555 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544558 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544562 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544566 | orchestrator | 2026-04-08 00:52:50.544569 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-08 00:52:50.544573 | orchestrator | Wednesday 08 April 2026 00:52:21 +0000 (0:00:03.143) 0:05:54.124 ******* 2026-04-08 00:52:50.544577 | orchestrator | changed: [testbed-node-0] 2026-04-08 
00:52:50.544581 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.544584 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.544588 | orchestrator | 2026-04-08 00:52:50.544592 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-08 00:52:50.544596 | orchestrator | Wednesday 08 April 2026 00:52:31 +0000 (0:00:09.722) 0:06:03.847 ******* 2026-04-08 00:52:50.544599 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544603 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544607 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544611 | orchestrator | 2026-04-08 00:52:50.544615 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-08 00:52:50.544618 | orchestrator | Wednesday 08 April 2026 00:52:34 +0000 (0:00:03.798) 0:06:07.645 ******* 2026-04-08 00:52:50.544622 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:50.544630 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:50.544636 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:50.544641 | orchestrator | 2026-04-08 00:52:50.544647 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-08 00:52:50.544652 | orchestrator | Wednesday 08 April 2026 00:52:44 +0000 (0:00:09.669) 0:06:17.314 ******* 2026-04-08 00:52:50.544660 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544668 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544679 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544685 | orchestrator | 2026-04-08 00:52:50.544691 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-08 00:52:50.544697 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:00.534) 0:06:17.849 ******* 2026-04-08 00:52:50.544709 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544719 | 
orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544726 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544732 | orchestrator | 2026-04-08 00:52:50.544737 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-08 00:52:50.544742 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:00.310) 0:06:18.159 ******* 2026-04-08 00:52:50.544748 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544755 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544761 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544766 | orchestrator | 2026-04-08 00:52:50.544771 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-08 00:52:50.544778 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:00.304) 0:06:18.464 ******* 2026-04-08 00:52:50.544783 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544790 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544795 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544801 | orchestrator | 2026-04-08 00:52:50.544807 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-08 00:52:50.544812 | orchestrator | Wednesday 08 April 2026 00:52:46 +0000 (0:00:00.297) 0:06:18.761 ******* 2026-04-08 00:52:50.544818 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544824 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544830 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544836 | orchestrator | 2026-04-08 00:52:50.544842 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-08 00:52:50.544847 | orchestrator | Wednesday 08 April 2026 00:52:46 +0000 (0:00:00.527) 0:06:19.289 ******* 2026-04-08 00:52:50.544853 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:50.544859 | 
orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:50.544866 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:50.544871 | orchestrator | 2026-04-08 00:52:50.544876 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-08 00:52:50.544882 | orchestrator | Wednesday 08 April 2026 00:52:46 +0000 (0:00:00.312) 0:06:19.602 ******* 2026-04-08 00:52:50.544888 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544893 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544899 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544904 | orchestrator | 2026-04-08 00:52:50.544910 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-08 00:52:50.544916 | orchestrator | Wednesday 08 April 2026 00:52:47 +0000 (0:00:00.856) 0:06:20.458 ******* 2026-04-08 00:52:50.544924 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:50.544929 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:50.544935 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:50.544941 | orchestrator | 2026-04-08 00:52:50.544946 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:52:50.544953 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-08 00:52:50.544959 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-08 00:52:50.544965 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-08 00:52:50.544971 | orchestrator | 2026-04-08 00:52:50.544978 | orchestrator | 2026-04-08 00:52:50.544984 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:52:50.544990 | orchestrator | Wednesday 08 April 2026 00:52:48 +0000 (0:00:00.777) 0:06:21.236 ******* 2026-04-08 
00:52:50.544996 | orchestrator | =============================================================================== 2026-04-08 00:52:50.545002 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.72s 2026-04-08 00:52:50.545013 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.67s 2026-04-08 00:52:50.545017 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.02s 2026-04-08 00:52:50.545021 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.51s 2026-04-08 00:52:50.545024 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.33s 2026-04-08 00:52:50.545028 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.28s 2026-04-08 00:52:50.545032 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.18s 2026-04-08 00:52:50.545035 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.85s 2026-04-08 00:52:50.545039 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.48s 2026-04-08 00:52:50.545043 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.46s 2026-04-08 00:52:50.545046 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.41s 2026-04-08 00:52:50.545050 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.40s 2026-04-08 00:52:50.545054 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.38s 2026-04-08 00:52:50.545063 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.34s 2026-04-08 00:52:50.545067 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.10s 2026-04-08 
00:52:50.545070 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.01s 2026-04-08 00:52:50.545074 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.94s 2026-04-08 00:52:50.545078 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.92s 2026-04-08 00:52:50.545082 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.91s 2026-04-08 00:52:50.545085 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.82s 2026-04-08 00:52:53.566893 | orchestrator | 2026-04-08 00:52:53 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:52:53.568306 | orchestrator | 2026-04-08 00:52:53 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:52:53.569667 | orchestrator | 2026-04-08 00:52:53 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:52:53.572035 | orchestrator | 2026-04-08 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:56.595409 | orchestrator | 2026-04-08 00:52:56 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:52:56.596132 | orchestrator | 2026-04-08 00:52:56 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:52:56.597616 | orchestrator | 2026-04-08 00:52:56 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:52:56.597699 | orchestrator | 2026-04-08 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:59.619196 | orchestrator | 2026-04-08 00:52:59 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:52:59.619368 | orchestrator | 2026-04-08 00:52:59 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:52:59.620276 | orchestrator | 2026-04-08 00:52:59 | INFO  | 
Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:52:59.620360 | orchestrator | 2026-04-08 00:52:59 | INFO  | Wait 1 second(s) until the next check [identical polling output for tasks c0473fbb-9133-45e9-b3d1-285ceebcf6d7, 861bc8ef-3b0e-42d8-94e0-258e7df726b8 and 70eb73db-4e37-47ea-851e-8230056c9328, repeated every ~3 seconds from 00:53:02 to 00:54:19, trimmed] 2026-04-08 00:54:22.064858 | orchestrator | 2026-04-08 00:54:22 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:54:22.067231 | orchestrator | 2026-04-08 00:54:22 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in
state STARTED 2026-04-08 00:54:22.070350 | orchestrator | 2026-04-08 00:54:22 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:54:22.070569 | orchestrator | 2026-04-08 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:25.134981 | orchestrator | 2026-04-08 00:54:25 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:54:25.138418 | orchestrator | 2026-04-08 00:54:25 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:54:25.143287 | orchestrator | 2026-04-08 00:54:25 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:54:25.143346 | orchestrator | 2026-04-08 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:28.191629 | orchestrator | 2026-04-08 00:54:28 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:54:28.194005 | orchestrator | 2026-04-08 00:54:28 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:54:28.196334 | orchestrator | 2026-04-08 00:54:28 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:54:28.196394 | orchestrator | 2026-04-08 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:31.249490 | orchestrator | 2026-04-08 00:54:31 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:54:31.251345 | orchestrator | 2026-04-08 00:54:31 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:54:31.253332 | orchestrator | 2026-04-08 00:54:31 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:54:31.253421 | orchestrator | 2026-04-08 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:34.304362 | orchestrator | 2026-04-08 00:54:34 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state STARTED 2026-04-08 00:54:34.304992 | orchestrator 
| 2026-04-08 00:54:34 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED
2026-04-08 00:54:34.305947 | orchestrator | 2026-04-08 00:54:34 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED
2026-04-08 00:54:34.305980 | orchestrator | 2026-04-08 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:54:37.359610 | orchestrator |
2026-04-08 00:54:37.359671 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-08 00:54:37.359680 | orchestrator | 2.16.14
2026-04-08 00:54:37.359687 | orchestrator |
2026-04-08 00:54:37.359693 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-08 00:54:37.359700 | orchestrator |
2026-04-08 00:54:37.359706 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-08 00:54:37.359724 | orchestrator | Wednesday 08 April 2026 00:44:06 +0000 (0:00:00.754) 0:00:00.754 *******
2026-04-08 00:54:37.359731 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.359736 | orchestrator |
2026-04-08 00:54:37.359739 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-08 00:54:37.359743 | orchestrator | Wednesday 08 April 2026 00:44:07 +0000 (0:00:01.242) 0:00:01.996 *******
2026-04-08 00:54:37.359747 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.359751 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.359755 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.359759 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.359762 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.359766 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.359770 | orchestrator |
2026-04-08 00:54:37.359774 | orchestrator | TASK [ceph-facts : Set_fact
is_atomic] *****************************************
2026-04-08 00:54:37.359778 | orchestrator | Wednesday 08 April 2026 00:44:09 +0000 (0:00:02.282) 0:00:04.278 *******
2026-04-08 00:54:37.359781 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.359785 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.359789 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.359793 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.359796 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.359800 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.359804 | orchestrator |
2026-04-08 00:54:37.359808 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-08 00:54:37.359812 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:00.630) 0:00:04.909 *******
2026-04-08 00:54:37.359825 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.359832 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.359837 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.359847 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.359853 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.359860 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.359865 | orchestrator |
2026-04-08 00:54:37.359871 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-08 00:54:37.359877 | orchestrator | Wednesday 08 April 2026 00:44:11 +0000 (0:00:01.006) 0:00:05.915 *******
2026-04-08 00:54:37.359882 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.359888 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.359910 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.359916 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.359922 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.359929 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.359936 | orchestrator |
2026-04-08 00:54:37.359942 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-08 00:54:37.359948 | orchestrator | Wednesday 08 April 2026 00:44:12 +0000 (0:00:00.701) 0:00:06.616 *******
2026-04-08 00:54:37.359952 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.359956 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.359960 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.359963 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.359967 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.359971 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.359974 | orchestrator |
2026-04-08 00:54:37.359978 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-08 00:54:37.359982 | orchestrator | Wednesday 08 April 2026 00:44:13 +0000 (0:00:01.063) 0:00:07.680 *******
2026-04-08 00:54:37.359986 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.359989 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.359993 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.359997 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.360000 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.360004 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.360008 | orchestrator |
2026-04-08 00:54:37.360011 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-08 00:54:37.360015 | orchestrator | Wednesday 08 April 2026 00:44:14 +0000 (0:00:01.375) 0:00:09.055 *******
2026-04-08 00:54:37.360019 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.360023 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.360027 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.360031 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.360034 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.360038 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.360042 | orchestrator |
2026-04-08 00:54:37.360045 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-08 00:54:37.360049 | orchestrator | Wednesday 08 April 2026 00:44:15 +0000 (0:00:01.333) 0:00:10.388 *******
2026-04-08 00:54:37.360053 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.360057 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.360061 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.360064 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.360068 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.360072 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.360078 | orchestrator |
2026-04-08 00:54:37.360088 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-08 00:54:37.360095 | orchestrator | Wednesday 08 April 2026 00:44:16 +0000 (0:00:01.021) 0:00:11.410 *******
2026-04-08 00:54:37.360101 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:54:37.360106 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-08 00:54:37.360112 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-08 00:54:37.360117 | orchestrator |
2026-04-08 00:54:37.360123 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-08 00:54:37.360129 | orchestrator | Wednesday 08 April 2026 00:44:17 +0000 (0:00:00.587) 0:00:11.997 *******
2026-04-08 00:54:37.360134 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.360140 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.360146 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.360152 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.360301 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.360312 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.360318 |
orchestrator | 2026-04-08 00:54:37.360325 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-08 00:54:37.360332 | orchestrator | Wednesday 08 April 2026 00:44:18 +0000 (0:00:01.474) 0:00:13.472 ******* 2026-04-08 00:54:37.360347 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-08 00:54:37.360359 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:37.360366 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:37.360373 | orchestrator | 2026-04-08 00:54:37.360378 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-08 00:54:37.360385 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:03.459) 0:00:16.932 ******* 2026-04-08 00:54:37.360390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:54:37.360394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:54:37.360398 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:54:37.360402 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360405 | orchestrator | 2026-04-08 00:54:37.360409 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-08 00:54:37.360413 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:00.450) 0:00:17.383 ******* 2026-04-08 00:54:37.360418 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360423 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360427 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360431 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360437 | orchestrator | 2026-04-08 00:54:37.360443 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-08 00:54:37.360447 | orchestrator | Wednesday 08 April 2026 00:44:24 +0000 (0:00:02.031) 0:00:19.414 ******* 2026-04-08 00:54:37.360452 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360458 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360466 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360469 | orchestrator | 2026-04-08 00:54:37.360473 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-08 00:54:37.360477 | orchestrator | Wednesday 08 April 2026 00:44:25 +0000 (0:00:00.387) 0:00:19.802 ******* 2026-04-08 00:54:37.360487 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-08 00:44:19.896034', 'end': '2026-04-08 00:44:20.021526', 'delta': '0:00:00.125492', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360498 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-08 00:44:20.930559', 'end': '2026-04-08 00:44:21.045329', 'delta': '0:00:00.114770', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  
2026-04-08 00:54:37.360503 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-08 00:44:22.067535', 'end': '2026-04-08 00:44:22.173978', 'delta': '0:00:00.106443', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.360507 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360510 | orchestrator | 2026-04-08 00:54:37.360514 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-08 00:54:37.360518 | orchestrator | Wednesday 08 April 2026 00:44:25 +0000 (0:00:00.717) 0:00:20.519 ******* 2026-04-08 00:54:37.360522 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.360526 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.360529 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.360533 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.360537 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.360541 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.360544 | orchestrator | 2026-04-08 00:54:37.360548 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-08 00:54:37.360552 | orchestrator | Wednesday 08 April 2026 00:44:29 +0000 (0:00:03.906) 0:00:24.426 ******* 2026-04-08 00:54:37.360556 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.360559 | orchestrator | 2026-04-08 00:54:37.360563 | orchestrator | TASK [ceph-facts : Set_fact current_fsid 
rc 1] ********************************* 2026-04-08 00:54:37.360567 | orchestrator | Wednesday 08 April 2026 00:44:30 +0000 (0:00:00.613) 0:00:25.040 ******* 2026-04-08 00:54:37.360571 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360575 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360578 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360582 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360586 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360589 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360593 | orchestrator | 2026-04-08 00:54:37.360597 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-08 00:54:37.360601 | orchestrator | Wednesday 08 April 2026 00:44:31 +0000 (0:00:01.085) 0:00:26.125 ******* 2026-04-08 00:54:37.360607 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360611 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360615 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360619 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360623 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360626 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360630 | orchestrator | 2026-04-08 00:54:37.360636 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-08 00:54:37.360641 | orchestrator | Wednesday 08 April 2026 00:44:32 +0000 (0:00:01.336) 0:00:27.461 ******* 2026-04-08 00:54:37.360645 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360649 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360653 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360656 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360660 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360664 | orchestrator | skipping: 
[testbed-node-5] 2026-04-08 00:54:37.360667 | orchestrator | 2026-04-08 00:54:37.360671 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-08 00:54:37.360675 | orchestrator | Wednesday 08 April 2026 00:44:33 +0000 (0:00:01.102) 0:00:28.564 ******* 2026-04-08 00:54:37.360678 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360682 | orchestrator | 2026-04-08 00:54:37.360686 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-08 00:54:37.360690 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:00.184) 0:00:28.749 ******* 2026-04-08 00:54:37.360693 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360697 | orchestrator | 2026-04-08 00:54:37.360701 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-08 00:54:37.360705 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:00.263) 0:00:29.013 ******* 2026-04-08 00:54:37.360708 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360712 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360716 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360719 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360723 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360727 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360730 | orchestrator | 2026-04-08 00:54:37.360737 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-08 00:54:37.360741 | orchestrator | Wednesday 08 April 2026 00:44:35 +0000 (0:00:00.756) 0:00:29.770 ******* 2026-04-08 00:54:37.360744 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360748 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360752 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360756 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:54:37.360762 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360766 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360769 | orchestrator | 2026-04-08 00:54:37.360773 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-08 00:54:37.360777 | orchestrator | Wednesday 08 April 2026 00:44:36 +0000 (0:00:00.994) 0:00:30.764 ******* 2026-04-08 00:54:37.360781 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360784 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360788 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360792 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360796 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360799 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360803 | orchestrator | 2026-04-08 00:54:37.360807 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-08 00:54:37.360810 | orchestrator | Wednesday 08 April 2026 00:44:37 +0000 (0:00:00.938) 0:00:31.703 ******* 2026-04-08 00:54:37.360814 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360818 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360825 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360828 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360832 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360836 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360840 | orchestrator | 2026-04-08 00:54:37.360843 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-08 00:54:37.360847 | orchestrator | Wednesday 08 April 2026 00:44:38 +0000 (0:00:01.041) 0:00:32.744 ******* 2026-04-08 00:54:37.360851 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360855 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:54:37.360858 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360862 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360866 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360870 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360874 | orchestrator | 2026-04-08 00:54:37.360881 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-08 00:54:37.360890 | orchestrator | Wednesday 08 April 2026 00:44:38 +0000 (0:00:00.597) 0:00:33.341 ******* 2026-04-08 00:54:37.360897 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360903 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360909 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360915 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360921 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360926 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360931 | orchestrator | 2026-04-08 00:54:37.360937 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-08 00:54:37.360943 | orchestrator | Wednesday 08 April 2026 00:44:39 +0000 (0:00:00.975) 0:00:34.317 ******* 2026-04-08 00:54:37.360948 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.360954 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.360960 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.360966 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.360971 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.360976 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.360982 | orchestrator | 2026-04-08 00:54:37.360988 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-08 00:54:37.360994 | orchestrator | Wednesday 08 April 2026 00:44:40 +0000 (0:00:01.056) 
0:00:35.374 ******* 2026-04-08 00:54:37.361001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part1', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part14', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part15', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part16', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361137 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361169 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361191 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.361200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361289 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part1', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part14', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part15', 
'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part16', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37 | INFO  | Task c0473fbb-9133-45e9-b3d1-285ceebcf6d7 is in state SUCCESS 2026-04-08 00:54:37.361774 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.361784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285',
'dm-uuid-LVM-DCtP4WqFyDlImNS25WUpBspIXbQ4b0MseJNdmaqBSWhvH3Znhvwkh6UD8M5v6au3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3', 'dm-uuid-LVM-BAlq3j3YZdEKD1c9X4cS0qsBF7TBXnmKdS3aHAqRkuDb5fBHAv2rWwtm6NolRKSw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361803 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361830 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.361837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162', 'dm-uuid-LVM-6l4kJSOv0R94h2yRg4PqmHo3vUKfeSF5I1LaWdvSHWsEWdizfAL30P0VjYcyBq5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361873 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035', 'dm-uuid-LVM-4l8XNG7D4K7HeOCdF199MCfOBuuofcWRFyfQpVHgpdArkKYJbUiWvAU03VsAlqZ2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 
00:54:37.361928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82M0gD-twSo-xX2e-GnNF-pWTz-pR9F-4A2iHp', 'scsi-0QEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232', 'scsi-SQEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:37.361961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361966 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xrVDS7-YlYj-c8pa-CWN3-w6TN-zGlL-9Yq4AT', 'scsi-0QEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76', 'scsi-SQEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNimtB-lzSd-iWW4-fWZp-LNYc-tskR-lU4ln0', 'scsi-0QEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707', 'scsi-SQEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-33A23c-Engk-9daO-jSPu-Tl11-ekzq-Jb8fW0', 'scsi-0QEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466', 'scsi-SQEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117', 'scsi-SQEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:37.361990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047', 'scsi-SQEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.361995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.362002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.362006 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.362010 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.362049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2', 'dm-uuid-LVM-4dOidnlTm9bAFU1bQbvhIfmV07E14tCv27YsQyeErGXnbtNwmdHoqxbHs0BmwtP4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494', 'dm-uuid-LVM-ZOWwtAXmXVeZGdA6c4d19phCtA4iFWHEWLP3dDLMb4oHu8JWx5caD1wehFycts3r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:54:37.362105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.362111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YSmTwt-CYiQ-X7jk-JP2g-hMRT-5ooj-Q2UMoO', 'scsi-0QEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f', 'scsi-SQEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.362136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-72ezyk-TfkG-aAnx-KaAz-mKlf-IuXi-AZeHcs', 'scsi-0QEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567', 'scsi-SQEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.362143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5', 'scsi-SQEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.362153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-08 00:54:37.362160 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.362166 | orchestrator |
2026-04-08 00:54:37.362176 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-08 00:54:37.362184 | orchestrator | Wednesday 08 April 2026 00:44:43 +0000 (0:00:02.445) 0:00:37.819 *******
2026-04-08 00:54:37.362190 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362198 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362209 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362215 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362225 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362234 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362244 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part1', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part14', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part15', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part16', 'scsi-SQEMU_QEMU_HARDDISK_6dc1d032-8cd5-4ab4-b457-5c11f59554f4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362343 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362365 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362373 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362378 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362383 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362387 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362397 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362402 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362407 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a79b7c9-a563-4433-b8f2-12de991d52c1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362415 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362419 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.362429 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362434 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362440 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362454 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362459 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362468 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362472 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.362482 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362487 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362496 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part1', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part14', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part15', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part16', 'scsi-SQEMU_QEMU_HARDDISK_b329422d-da47-45a8-ac99-562cc2d58717-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362501 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285', 'dm-uuid-LVM-DCtP4WqFyDlImNS25WUpBspIXbQ4b0MseJNdmaqBSWhvH3Znhvwkh6UD8M5v6au3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3', 'dm-uuid-LVM-BAlq3j3YZdEKD1c9X4cS0qsBF7TBXnmKdS3aHAqRkuDb5fBHAv2rWwtm6NolRKSw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:37.362524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var':
'item'})  2026-04-08 00:54:37.362528 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.362533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-08 00:54:37.362602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162', 'dm-uuid-LVM-6l4kJSOv0R94h2yRg4PqmHo3vUKfeSF5I1LaWdvSHWsEWdizfAL30P0VjYcyBq5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362615 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035', 'dm-uuid-LVM-4l8XNG7D4K7HeOCdF199MCfOBuuofcWRFyfQpVHgpdArkKYJbUiWvAU03VsAlqZ2'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362646 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.362677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-08 00:54:37.363211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2', 'dm-uuid-LVM-4dOidnlTm9bAFU1bQbvhIfmV07E14tCv27YsQyeErGXnbtNwmdHoqxbHs0BmwtP4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82M0gD-twSo-xX2e-GnNF-pWTz-pR9F-4A2iHp', 'scsi-0QEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232', 'scsi-SQEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363234 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494', 'dm-uuid-LVM-ZOWwtAXmXVeZGdA6c4d19phCtA4iFWHEWLP3dDLMb4oHu8JWx5caD1wehFycts3r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363240 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xrVDS7-YlYj-c8pa-CWN3-w6TN-zGlL-9Yq4AT', 'scsi-0QEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76', 'scsi-SQEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363248 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363266 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117', 'scsi-SQEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363279 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363287 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363291 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-08 00:54:37.363295 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363299 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363310 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363315 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNimtB-lzSd-iWW4-fWZp-LNYc-tskR-lU4ln0', 'scsi-0QEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707', 'scsi-SQEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-33A23c-Engk-9daO-jSPu-Tl11-ekzq-Jb8fW0', 'scsi-0QEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466', 'scsi-SQEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363323 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047', 'scsi-SQEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363344 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.363348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363352 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363356 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-08 00:54:37.363373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YSmTwt-CYiQ-X7jk-JP2g-hMRT-5ooj-Q2UMoO', 'scsi-0QEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f', 'scsi-SQEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-72ezyk-TfkG-aAnx-KaAz-mKlf-IuXi-AZeHcs', 'scsi-0QEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567', 'scsi-SQEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5', 'scsi-SQEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363385 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:37.363391 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.363395 | orchestrator | 2026-04-08 00:54:37.363398 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-08 00:54:37.363403 | orchestrator | Wednesday 08 April 2026 00:44:45 +0000 (0:00:02.000) 0:00:39.819 ******* 2026-04-08 00:54:37.363409 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.363413 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.363417 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.363421 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.363425 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.363428 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.363432 | orchestrator | 2026-04-08 00:54:37.363438 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-08 00:54:37.363442 | orchestrator | Wednesday 08 April 2026 00:44:46 +0000 (0:00:01.390) 0:00:41.210 ******* 2026-04-08 00:54:37.363446 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.363449 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.363453 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.363457 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.363460 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.363464 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.363468 | orchestrator | 2026-04-08 00:54:37.363471 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-08 00:54:37.363475 | orchestrator | Wednesday 08 April 2026 00:44:47 +0000 (0:00:00.814) 0:00:42.024 ******* 2026-04-08 00:54:37.363479 | orchestrator | skipping: [testbed-node-0] 2026-04-08 
00:54:37.363483 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.363486 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.363490 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363494 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.363498 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.363501 | orchestrator | 2026-04-08 00:54:37.363505 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-08 00:54:37.363509 | orchestrator | Wednesday 08 April 2026 00:44:48 +0000 (0:00:00.868) 0:00:42.893 ******* 2026-04-08 00:54:37.363512 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.363516 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.363520 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.363524 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363527 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.363531 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.363535 | orchestrator | 2026-04-08 00:54:37.363538 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-08 00:54:37.363542 | orchestrator | Wednesday 08 April 2026 00:44:49 +0000 (0:00:01.012) 0:00:43.906 ******* 2026-04-08 00:54:37.363546 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.363549 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.363553 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.363557 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363560 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.363564 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.363568 | orchestrator | 2026-04-08 00:54:37.363574 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-08 00:54:37.363579 | orchestrator | Wednesday 08 April 
2026 00:44:50 +0000 (0:00:01.647) 0:00:45.554 ******* 2026-04-08 00:54:37.363590 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.363596 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.363602 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.363607 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363613 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.363620 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.363626 | orchestrator | 2026-04-08 00:54:37.363632 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-08 00:54:37.363638 | orchestrator | Wednesday 08 April 2026 00:44:52 +0000 (0:00:01.052) 0:00:46.606 ******* 2026-04-08 00:54:37.363646 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-08 00:54:37.363650 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-08 00:54:37.363654 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-08 00:54:37.363657 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-08 00:54:37.363663 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-08 00:54:37.363669 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-08 00:54:37.363674 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-08 00:54:37.363684 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-08 00:54:37.363693 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-08 00:54:37.363698 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-08 00:54:37.363704 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-08 00:54:37.363709 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-08 00:54:37.363715 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-08 00:54:37.363721 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-1) 2026-04-08 00:54:37.363726 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-08 00:54:37.363732 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-08 00:54:37.363738 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-08 00:54:37.363743 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-08 00:54:37.363749 | orchestrator | 2026-04-08 00:54:37.363754 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-08 00:54:37.363759 | orchestrator | Wednesday 08 April 2026 00:44:55 +0000 (0:00:03.623) 0:00:50.229 ******* 2026-04-08 00:54:37.363765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:54:37.363771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:54:37.363777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:54:37.363783 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.363789 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-08 00:54:37.363795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-08 00:54:37.363801 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-08 00:54:37.363807 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.363813 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-08 00:54:37.363820 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-08 00:54:37.363831 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-08 00:54:37.363838 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.363845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-08 00:54:37.363850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-08 00:54:37.363857 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-08 00:54:37.363861 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363866 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-08 00:54:37.363870 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-08 00:54:37.363874 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-08 00:54:37.363883 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.363887 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-08 00:54:37.363891 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-08 00:54:37.363895 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-08 00:54:37.363900 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.363918 | orchestrator | 2026-04-08 00:54:37.363922 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-08 00:54:37.363927 | orchestrator | Wednesday 08 April 2026 00:44:57 +0000 (0:00:01.480) 0:00:51.710 ******* 2026-04-08 00:54:37.363931 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.363935 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.363940 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.363944 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.363949 | orchestrator | 2026-04-08 00:54:37.363953 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-08 00:54:37.363958 | orchestrator | Wednesday 08 April 2026 00:44:58 +0000 (0:00:01.616) 0:00:53.326 ******* 2026-04-08 00:54:37.363962 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363966 | orchestrator | skipping: 
[testbed-node-4] 2026-04-08 00:54:37.363970 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.363975 | orchestrator | 2026-04-08 00:54:37.363979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-08 00:54:37.363983 | orchestrator | Wednesday 08 April 2026 00:44:59 +0000 (0:00:00.451) 0:00:53.778 ******* 2026-04-08 00:54:37.363988 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.363992 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.363996 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.364000 | orchestrator | 2026-04-08 00:54:37.364005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-08 00:54:37.364009 | orchestrator | Wednesday 08 April 2026 00:44:59 +0000 (0:00:00.454) 0:00:54.232 ******* 2026-04-08 00:54:37.364014 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364018 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.364022 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.364026 | orchestrator | 2026-04-08 00:54:37.364031 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-08 00:54:37.364035 | orchestrator | Wednesday 08 April 2026 00:45:00 +0000 (0:00:00.392) 0:00:54.625 ******* 2026-04-08 00:54:37.364039 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.364044 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.364048 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.364053 | orchestrator | 2026-04-08 00:54:37.364059 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-08 00:54:37.364065 | orchestrator | Wednesday 08 April 2026 00:45:00 +0000 (0:00:00.894) 0:00:55.520 ******* 2026-04-08 00:54:37.364075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:37.364082 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:37.364088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:37.364094 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364100 | orchestrator | 2026-04-08 00:54:37.364106 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-08 00:54:37.364112 | orchestrator | Wednesday 08 April 2026 00:45:01 +0000 (0:00:00.397) 0:00:55.917 ******* 2026-04-08 00:54:37.364119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:37.364125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:37.364132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:37.364138 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364144 | orchestrator | 2026-04-08 00:54:37.364156 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-08 00:54:37.364162 | orchestrator | Wednesday 08 April 2026 00:45:01 +0000 (0:00:00.519) 0:00:56.436 ******* 2026-04-08 00:54:37.364168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:37.364173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:37.364179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:37.364194 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364200 | orchestrator | 2026-04-08 00:54:37.364206 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-08 00:54:37.364212 | orchestrator | Wednesday 08 April 2026 00:45:02 +0000 (0:00:00.398) 0:00:56.835 ******* 2026-04-08 00:54:37.364218 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.364224 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.364230 | orchestrator | ok: [testbed-node-5] 
2026-04-08 00:54:37.364237 | orchestrator | 2026-04-08 00:54:37.364242 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-08 00:54:37.364249 | orchestrator | Wednesday 08 April 2026 00:45:02 +0000 (0:00:00.375) 0:00:57.210 ******* 2026-04-08 00:54:37.364360 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-08 00:54:37.364365 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-08 00:54:37.364369 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-08 00:54:37.364373 | orchestrator | 2026-04-08 00:54:37.364382 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-08 00:54:37.364386 | orchestrator | Wednesday 08 April 2026 00:45:03 +0000 (0:00:01.278) 0:00:58.489 ******* 2026-04-08 00:54:37.364390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-08 00:54:37.364399 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:37.364403 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:37.364407 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-08 00:54:37.364411 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-08 00:54:37.364415 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:54:37.364418 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:54:37.364422 | orchestrator | 2026-04-08 00:54:37.364426 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-08 00:54:37.364430 | orchestrator | Wednesday 08 April 2026 00:45:05 +0000 (0:00:01.806) 0:01:00.296 ******* 2026-04-08 00:54:37.364433 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2026-04-08 00:54:37.364437 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:37.364441 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:37.364444 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-08 00:54:37.364448 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-08 00:54:37.364452 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:54:37.364456 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:54:37.364459 | orchestrator | 2026-04-08 00:54:37.364463 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:54:37.364467 | orchestrator | Wednesday 08 April 2026 00:45:07 +0000 (0:00:01.934) 0:01:02.231 ******* 2026-04-08 00:54:37.364471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.364475 | orchestrator | 2026-04-08 00:54:37.364479 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 00:54:37.364488 | orchestrator | Wednesday 08 April 2026 00:45:08 +0000 (0:00:01.128) 0:01:03.359 ******* 2026-04-08 00:54:37.364492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.364496 | orchestrator | 2026-04-08 00:54:37.364500 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-08 00:54:37.364504 | orchestrator | Wednesday 08 April 2026 
00:45:10 +0000 (0:00:01.703) 0:01:05.063 ******* 2026-04-08 00:54:37.364507 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.364512 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.364518 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364523 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.364532 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.364541 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.364547 | orchestrator | 2026-04-08 00:54:37.364552 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:54:37.364558 | orchestrator | Wednesday 08 April 2026 00:45:11 +0000 (0:00:01.373) 0:01:06.437 ******* 2026-04-08 00:54:37.364564 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.364570 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.364575 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.364581 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.364587 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.364592 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.364599 | orchestrator | 2026-04-08 00:54:37.364606 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:54:37.364612 | orchestrator | Wednesday 08 April 2026 00:45:13 +0000 (0:00:01.431) 0:01:07.868 ******* 2026-04-08 00:54:37.364618 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.364624 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.364630 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.364636 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.364640 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.364644 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.364647 | orchestrator | 2026-04-08 00:54:37.364651 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-04-08 00:54:37.364655 | orchestrator | Wednesday 08 April 2026 00:45:15 +0000 (0:00:01.765) 0:01:09.633 ******* 2026-04-08 00:54:37.364659 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.364662 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.364666 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.364670 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.364674 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.364677 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.364681 | orchestrator | 2026-04-08 00:54:37.364685 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 00:54:37.364689 | orchestrator | Wednesday 08 April 2026 00:45:17 +0000 (0:00:02.361) 0:01:11.995 ******* 2026-04-08 00:54:37.364692 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.364696 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.364700 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364704 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.364707 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.364711 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.364715 | orchestrator | 2026-04-08 00:54:37.364718 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:54:37.364726 | orchestrator | Wednesday 08 April 2026 00:45:19 +0000 (0:00:01.660) 0:01:13.655 ******* 2026-04-08 00:54:37.364730 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.364733 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.364737 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.364744 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364751 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.364755 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.364759 | 
orchestrator | 2026-04-08 00:54:37.364763 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:54:37.364766 | orchestrator | Wednesday 08 April 2026 00:45:20 +0000 (0:00:01.247) 0:01:14.903 ******* 2026-04-08 00:54:37.364770 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.364774 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.364777 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.364781 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.364785 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.364789 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.364792 | orchestrator | 2026-04-08 00:54:37.364796 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-08 00:54:37.364800 | orchestrator | Wednesday 08 April 2026 00:45:21 +0000 (0:00:00.870) 0:01:15.774 ******* 2026-04-08 00:54:37.364804 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.364807 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.364811 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.364822 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.364826 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.364834 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.364838 | orchestrator | 2026-04-08 00:54:37.364842 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:54:37.364846 | orchestrator | Wednesday 08 April 2026 00:45:22 +0000 (0:00:01.686) 0:01:17.460 ******* 2026-04-08 00:54:37.364849 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.364853 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.364857 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.364860 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.364864 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.364868 | 
orchestrator | ok: [testbed-node-4]
orchestrator |
orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
orchestrator | Wednesday 08 April 2026 00:45:24 +0000 (0:00:01.396) 0:01:18.857 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
orchestrator | Wednesday 08 April 2026 00:45:25 +0000 (0:00:01.225) 0:01:20.083 *******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
orchestrator | Wednesday 08 April 2026 00:45:26 +0000 (0:00:01.169) 0:01:21.253 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
orchestrator | Wednesday 08 April 2026 00:45:27 +0000 (0:00:01.069) 0:01:22.322 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
orchestrator | Wednesday 08 April 2026 00:45:28 +0000 (0:00:00.870) 0:01:23.193 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
orchestrator | Wednesday 08 April 2026 00:45:29 +0000 (0:00:01.284) 0:01:24.478 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
orchestrator | Wednesday 08 April 2026 00:45:30 +0000 (0:00:00.898) 0:01:25.377 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
orchestrator | Wednesday 08 April 2026 00:45:32 +0000 (0:00:01.468) 0:01:26.845 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
orchestrator | Wednesday 08 April 2026 00:45:33 +0000 (0:00:00.812) 0:01:27.657 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
orchestrator | Wednesday 08 April 2026 00:45:34 +0000 (0:00:01.126) 0:01:28.783 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:01.800) 0:01:30.584 *******
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
orchestrator | Wednesday 08 April 2026 00:45:37 +0000 (0:00:01.654) 0:01:32.238 *******
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
orchestrator | Wednesday 08 April 2026 00:45:40 +0000 (0:00:02.538) 0:01:34.776 *******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
orchestrator | Wednesday 08 April 2026 00:45:41 +0000 (0:00:01.134) 0:01:35.911 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
orchestrator | Wednesday 08 April 2026 00:45:41 +0000 (0:00:00.525) 0:01:36.436 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
orchestrator | Wednesday 08 April 2026 00:45:42 +0000 (0:00:00.727) 0:01:37.163 *******
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator |
orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
orchestrator | Wednesday 08 April 2026 00:45:44 +0000 (0:00:01.639) 0:01:38.803 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
orchestrator | Wednesday 08 April 2026 00:45:45 +0000 (0:00:01.316) 0:01:40.119 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
orchestrator | Wednesday 08 April 2026 00:45:46 +0000 (0:00:00.839) 0:01:40.959 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
orchestrator | Wednesday 08 April 2026 00:45:47 +0000 (0:00:00.941) 0:01:41.900 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
orchestrator | Wednesday 08 April 2026 00:45:47 +0000 (0:00:00.664) 0:01:42.564 *******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
orchestrator | Wednesday 08 April 2026 00:45:49 +0000 (0:00:01.411) 0:01:43.976 *******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
orchestrator | Wednesday 08 April 2026 00:46:43 +0000 (0:00:54.164) 0:02:38.140 *******
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
orchestrator | Wednesday 08 April 2026 00:46:44 +0000 (0:00:00.784) 0:02:38.925 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
orchestrator | Wednesday 08 April 2026 00:46:45 +0000 (0:00:00.999) 0:02:39.925 *******
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
orchestrator | Wednesday 08 April 2026 00:46:45 +0000 (0:00:00.145) 0:02:40.070 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:00.587) 0:02:40.658 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
orchestrator | Wednesday 08 April 2026 00:46:46 +0000 (0:00:00.734) 0:02:41.392 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
orchestrator | Wednesday 08 April 2026 00:46:47 +0000 (0:00:00.560) 0:02:41.953 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
orchestrator | Wednesday 08 April 2026 00:46:49 +0000 (0:00:01.921) 0:02:43.875 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
orchestrator | Wednesday 08 April 2026 00:46:49 +0000 (0:00:00.559) 0:02:44.434 *******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:01.484) 0:02:45.918 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:00.984) 0:02:46.903 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
orchestrator | Wednesday 08 April 2026 00:46:53 +0000 (0:00:01.131) 0:02:48.035 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
orchestrator | Wednesday 08 April 2026 00:46:54 +0000 (0:00:00.572) 0:02:48.608 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
orchestrator | Wednesday 08 April 2026 00:46:54 +0000 (0:00:00.865) 0:02:49.474 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
orchestrator | Wednesday 08 April 2026 00:46:55 +0000 (0:00:00.594) 0:02:50.068 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.983) 0:02:51.051 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.569) 0:02:51.621 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.639) 0:02:52.261 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
orchestrator | Wednesday 08 April 2026 00:46:58 +0000 (0:00:01.153) 0:02:53.414 *******
orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
orchestrator | Wednesday 08 April 2026 00:46:59 +0000 (0:00:01.033) 0:02:54.448 *******
orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
orchestrator |
orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
orchestrator | Wednesday 08 April 2026 00:47:06 +0000 (0:00:06.915) 0:03:01.363 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
orchestrator | Wednesday 08 April 2026 00:47:08 +0000 (0:00:01.271) 0:03:02.635 *******
2026-04-08 00:54:37.367138 | orchestrator
| changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367149 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367154 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367160 | orchestrator | 2026-04-08 00:54:37.367167 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-08 00:54:37.367172 | orchestrator | Wednesday 08 April 2026 00:47:08 +0000 (0:00:00.819) 0:03:03.454 ******* 2026-04-08 00:54:37.367178 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367185 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367191 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367196 | orchestrator | 2026-04-08 00:54:37.367203 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-08 00:54:37.367209 | orchestrator | Wednesday 08 April 2026 00:47:10 +0000 (0:00:01.278) 0:03:04.732 ******* 2026-04-08 00:54:37.367215 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367220 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367226 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367233 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.367239 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.367245 | orchestrator | ok: [testbed-node-5] 2026-04-08 
00:54:37.367300 | orchestrator | 2026-04-08 00:54:37.367309 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-08 00:54:37.367315 | orchestrator | Wednesday 08 April 2026 00:47:10 +0000 (0:00:00.634) 0:03:05.367 ******* 2026-04-08 00:54:37.367321 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367326 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367332 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367338 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.367345 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.367351 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.367357 | orchestrator | 2026-04-08 00:54:37.367364 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-08 00:54:37.367370 | orchestrator | Wednesday 08 April 2026 00:47:11 +0000 (0:00:00.922) 0:03:06.290 ******* 2026-04-08 00:54:37.367376 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367382 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367389 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367395 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367401 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367410 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.367417 | orchestrator | 2026-04-08 00:54:37.367423 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-08 00:54:37.367428 | orchestrator | Wednesday 08 April 2026 00:47:12 +0000 (0:00:00.704) 0:03:06.994 ******* 2026-04-08 00:54:37.367440 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367444 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367447 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367451 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367455 | 
orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367466 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.367470 | orchestrator | 2026-04-08 00:54:37.367473 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-08 00:54:37.367477 | orchestrator | Wednesday 08 April 2026 00:47:13 +0000 (0:00:00.844) 0:03:07.838 ******* 2026-04-08 00:54:37.367481 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367485 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367488 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367496 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367500 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367504 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.367508 | orchestrator | 2026-04-08 00:54:37.367511 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-08 00:54:37.367515 | orchestrator | Wednesday 08 April 2026 00:47:13 +0000 (0:00:00.600) 0:03:08.438 ******* 2026-04-08 00:54:37.367519 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367523 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367526 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367530 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367534 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367538 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.367541 | orchestrator | 2026-04-08 00:54:37.367545 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-08 00:54:37.367549 | orchestrator | Wednesday 08 April 2026 00:47:14 +0000 (0:00:00.634) 0:03:09.073 ******* 2026-04-08 00:54:37.367552 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367559 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:54:37.367568 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367575 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367580 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367587 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.367592 | orchestrator | 2026-04-08 00:54:37.367598 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-08 00:54:37.367605 | orchestrator | Wednesday 08 April 2026 00:47:15 +0000 (0:00:00.850) 0:03:09.923 ******* 2026-04-08 00:54:37.367611 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367617 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367623 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367630 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367636 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367643 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.367649 | orchestrator | 2026-04-08 00:54:37.367656 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-08 00:54:37.367660 | orchestrator | Wednesday 08 April 2026 00:47:16 +0000 (0:00:00.691) 0:03:10.614 ******* 2026-04-08 00:54:37.367664 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367668 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367671 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367675 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.367679 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.367683 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.367687 | orchestrator | 2026-04-08 00:54:37.367690 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-08 00:54:37.367694 | orchestrator | Wednesday 08 April 2026 00:47:18 
+0000 (0:00:02.422) 0:03:13.037 ******* 2026-04-08 00:54:37.367698 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367701 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367705 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367709 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.367713 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.367716 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.367720 | orchestrator | 2026-04-08 00:54:37.367724 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-08 00:54:37.367730 | orchestrator | Wednesday 08 April 2026 00:47:19 +0000 (0:00:00.698) 0:03:13.735 ******* 2026-04-08 00:54:37.367736 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367742 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367747 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367753 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.367759 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.367770 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.367776 | orchestrator | 2026-04-08 00:54:37.367783 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-08 00:54:37.367789 | orchestrator | Wednesday 08 April 2026 00:47:20 +0000 (0:00:01.100) 0:03:14.836 ******* 2026-04-08 00:54:37.367795 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367802 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367808 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367815 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367820 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367827 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.367833 | orchestrator | 2026-04-08 00:54:37.367840 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2026-04-08 00:54:37.367847 | orchestrator | Wednesday 08 April 2026 00:47:21 +0000 (0:00:01.030) 0:03:15.866 ******* 2026-04-08 00:54:37.367852 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367859 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367863 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367867 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367872 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367879 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.367885 | orchestrator | 2026-04-08 00:54:37.367897 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-08 00:54:37.367906 | orchestrator | Wednesday 08 April 2026 00:47:22 +0000 (0:00:01.001) 0:03:16.867 ******* 2026-04-08 00:54:37.367914 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.367922 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.367929 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.367938 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-08 00:54:37.367946 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-08 00:54:37.367953 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.367960 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-08 00:54:37.367967 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-08 00:54:37.367973 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.367979 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-08 00:54:37.367985 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-08 00:54:37.367997 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368004 | orchestrator | 2026-04-08 00:54:37.368011 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-08 00:54:37.368017 | orchestrator | Wednesday 08 April 2026 00:47:23 +0000 (0:00:00.792) 0:03:17.660 ******* 2026-04-08 00:54:37.368024 | 
orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368030 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368037 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368043 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368049 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368055 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368062 | orchestrator | 2026-04-08 00:54:37.368069 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-08 00:54:37.368076 | orchestrator | Wednesday 08 April 2026 00:47:24 +0000 (0:00:01.094) 0:03:18.755 ******* 2026-04-08 00:54:37.368083 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368090 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368097 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368104 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368111 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368118 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368125 | orchestrator | 2026-04-08 00:54:37.368132 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-08 00:54:37.368138 | orchestrator | Wednesday 08 April 2026 00:47:24 +0000 (0:00:00.776) 0:03:19.531 ******* 2026-04-08 00:54:37.368145 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368152 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368158 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368164 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368171 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368178 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368184 | orchestrator | 2026-04-08 00:54:37.368191 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2026-04-08 00:54:37.368198 | orchestrator | Wednesday 08 April 2026 00:47:26 +0000 (0:00:01.261) 0:03:20.792 ******* 2026-04-08 00:54:37.368205 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368212 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368219 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368226 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368232 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368239 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368246 | orchestrator | 2026-04-08 00:54:37.368265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-08 00:54:37.368296 | orchestrator | Wednesday 08 April 2026 00:47:26 +0000 (0:00:00.790) 0:03:21.583 ******* 2026-04-08 00:54:37.368304 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368317 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368324 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368330 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368336 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368343 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368349 | orchestrator | 2026-04-08 00:54:37.368358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-08 00:54:37.368365 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:01.170) 0:03:22.754 ******* 2026-04-08 00:54:37.368371 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368377 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368384 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368390 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.368401 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.368408 | orchestrator | ok: 
[testbed-node-4] 2026-04-08 00:54:37.368414 | orchestrator | 2026-04-08 00:54:37.368420 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-08 00:54:37.368426 | orchestrator | Wednesday 08 April 2026 00:47:29 +0000 (0:00:01.039) 0:03:23.793 ******* 2026-04-08 00:54:37.368432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-08 00:54:37.368439 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-08 00:54:37.368445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-08 00:54:37.368451 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368457 | orchestrator | 2026-04-08 00:54:37.368463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-08 00:54:37.368469 | orchestrator | Wednesday 08 April 2026 00:47:29 +0000 (0:00:00.724) 0:03:24.518 ******* 2026-04-08 00:54:37.368475 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-08 00:54:37.368481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-08 00:54:37.368488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-08 00:54:37.368494 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368500 | orchestrator | 2026-04-08 00:54:37.368506 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-08 00:54:37.368512 | orchestrator | Wednesday 08 April 2026 00:47:31 +0000 (0:00:01.081) 0:03:25.599 ******* 2026-04-08 00:54:37.368519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-08 00:54:37.368525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-08 00:54:37.368531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-08 00:54:37.368538 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368545 | 
orchestrator | 2026-04-08 00:54:37.368551 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-08 00:54:37.368557 | orchestrator | Wednesday 08 April 2026 00:47:31 +0000 (0:00:00.372) 0:03:25.972 ******* 2026-04-08 00:54:37.368563 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368567 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368571 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368575 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.368578 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.368582 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.368586 | orchestrator | 2026-04-08 00:54:37.368589 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-08 00:54:37.368593 | orchestrator | Wednesday 08 April 2026 00:47:32 +0000 (0:00:00.735) 0:03:26.707 ******* 2026-04-08 00:54:37.368597 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-08 00:54:37.368601 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368604 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-08 00:54:37.368608 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368612 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-08 00:54:37.368615 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368619 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-08 00:54:37.368623 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-08 00:54:37.368627 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-08 00:54:37.368630 | orchestrator | 2026-04-08 00:54:37.368634 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-08 00:54:37.368638 | orchestrator | Wednesday 08 April 2026 00:47:34 +0000 (0:00:02.411) 0:03:29.119 ******* 2026-04-08 00:54:37.368641 | orchestrator | changed: 
[testbed-node-0] 2026-04-08 00:54:37.368645 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.368649 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.368652 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.368656 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.368660 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.368669 | orchestrator | 2026-04-08 00:54:37.368673 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-08 00:54:37.368677 | orchestrator | Wednesday 08 April 2026 00:47:37 +0000 (0:00:03.007) 0:03:32.127 ******* 2026-04-08 00:54:37.368680 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.368684 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.368688 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.368691 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.368695 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.368699 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.368702 | orchestrator | 2026-04-08 00:54:37.368706 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-08 00:54:37.368710 | orchestrator | Wednesday 08 April 2026 00:47:38 +0000 (0:00:01.153) 0:03:33.280 ******* 2026-04-08 00:54:37.368714 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368717 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368721 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368725 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:54:37.368729 | orchestrator | 2026-04-08 00:54:37.368733 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-08 00:54:37.368736 | orchestrator | Wednesday 08 April 2026 00:47:39 +0000 (0:00:00.847) 
0:03:34.127 ******* 2026-04-08 00:54:37.368740 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.368744 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.368747 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.368755 | orchestrator | 2026-04-08 00:54:37.368759 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-08 00:54:37.368763 | orchestrator | Wednesday 08 April 2026 00:47:39 +0000 (0:00:00.317) 0:03:34.445 ******* 2026-04-08 00:54:37.368766 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.368773 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.368776 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.368780 | orchestrator | 2026-04-08 00:54:37.368784 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-08 00:54:37.368788 | orchestrator | Wednesday 08 April 2026 00:47:41 +0000 (0:00:01.196) 0:03:35.641 ******* 2026-04-08 00:54:37.368791 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:54:37.368795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:54:37.368799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:54:37.368803 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368806 | orchestrator | 2026-04-08 00:54:37.368810 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-08 00:54:37.368814 | orchestrator | Wednesday 08 April 2026 00:47:41 +0000 (0:00:00.888) 0:03:36.530 ******* 2026-04-08 00:54:37.368817 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.368821 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.368825 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.368828 | orchestrator | 2026-04-08 00:54:37.368832 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] 
********************************** 2026-04-08 00:54:37.368836 | orchestrator | Wednesday 08 April 2026 00:47:42 +0000 (0:00:00.504) 0:03:37.035 ******* 2026-04-08 00:54:37.368839 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.368843 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.368847 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.368851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.368854 | orchestrator | 2026-04-08 00:54:37.368858 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-08 00:54:37.368862 | orchestrator | Wednesday 08 April 2026 00:47:43 +0000 (0:00:00.795) 0:03:37.830 ******* 2026-04-08 00:54:37.368865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:37.368872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:37.368875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:37.368879 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368883 | orchestrator | 2026-04-08 00:54:37.368887 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-08 00:54:37.368890 | orchestrator | Wednesday 08 April 2026 00:47:43 +0000 (0:00:00.512) 0:03:38.343 ******* 2026-04-08 00:54:37.368894 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368898 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368901 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368905 | orchestrator | 2026-04-08 00:54:37.368909 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-08 00:54:37.368912 | orchestrator | Wednesday 08 April 2026 00:47:44 +0000 (0:00:00.476) 0:03:38.819 ******* 2026-04-08 00:54:37.368916 | orchestrator | 
skipping: [testbed-node-3] 2026-04-08 00:54:37.368920 | orchestrator | 2026-04-08 00:54:37.368924 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-08 00:54:37.368927 | orchestrator | Wednesday 08 April 2026 00:47:44 +0000 (0:00:00.201) 0:03:39.021 ******* 2026-04-08 00:54:37.368931 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368935 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.368938 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.368942 | orchestrator | 2026-04-08 00:54:37.368946 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-08 00:54:37.368949 | orchestrator | Wednesday 08 April 2026 00:47:44 +0000 (0:00:00.299) 0:03:39.321 ******* 2026-04-08 00:54:37.368953 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368957 | orchestrator | 2026-04-08 00:54:37.368960 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-08 00:54:37.368964 | orchestrator | Wednesday 08 April 2026 00:47:44 +0000 (0:00:00.198) 0:03:39.519 ******* 2026-04-08 00:54:37.368968 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368972 | orchestrator | 2026-04-08 00:54:37.368975 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-08 00:54:37.368979 | orchestrator | Wednesday 08 April 2026 00:47:45 +0000 (0:00:00.183) 0:03:39.703 ******* 2026-04-08 00:54:37.368983 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.368986 | orchestrator | 2026-04-08 00:54:37.368990 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-08 00:54:37.368994 | orchestrator | Wednesday 08 April 2026 00:47:45 +0000 (0:00:00.122) 0:03:39.826 ******* 2026-04-08 00:54:37.368997 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.369001 | orchestrator | 
2026-04-08 00:54:37.369005 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-08 00:54:37.369008 | orchestrator | Wednesday 08 April 2026 00:47:45 +0000 (0:00:00.261) 0:03:40.088 *******
2026-04-08 00:54:37.369012 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.369016 | orchestrator |
2026-04-08 00:54:37.369020 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-08 00:54:37.369023 | orchestrator | Wednesday 08 April 2026 00:47:45 +0000 (0:00:00.221) 0:03:40.309 *******
2026-04-08 00:54:37.369027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:54:37.369031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:54:37.369034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:54:37.369038 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.369042 | orchestrator |
2026-04-08 00:54:37.369045 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-08 00:54:37.369049 | orchestrator | Wednesday 08 April 2026 00:47:46 +0000 (0:00:00.540) 0:03:40.850 *******
2026-04-08 00:54:37.369053 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.369059 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.369067 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.369070 | orchestrator |
2026-04-08 00:54:37.369074 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-08 00:54:37.369080 | orchestrator | Wednesday 08 April 2026 00:47:46 +0000 (0:00:00.447) 0:03:41.298 *******
2026-04-08 00:54:37.369084 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.369088 | orchestrator |
2026-04-08 00:54:37.369091 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-08 00:54:37.369095 | orchestrator | Wednesday 08 April 2026 00:47:46 +0000 (0:00:00.226) 0:03:41.525 *******
2026-04-08 00:54:37.369099 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.369102 | orchestrator |
2026-04-08 00:54:37.369106 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-08 00:54:37.369110 | orchestrator | Wednesday 08 April 2026 00:47:47 +0000 (0:00:00.194) 0:03:41.720 *******
2026-04-08 00:54:37.369113 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.369117 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.369121 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.369125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.369128 | orchestrator |
2026-04-08 00:54:37.369132 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-08 00:54:37.369136 | orchestrator | Wednesday 08 April 2026 00:47:48 +0000 (0:00:00.906) 0:03:42.626 *******
2026-04-08 00:54:37.369140 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.369143 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.369147 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.369153 | orchestrator |
2026-04-08 00:54:37.369159 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-08 00:54:37.369169 | orchestrator | Wednesday 08 April 2026 00:47:48 +0000 (0:00:00.348) 0:03:42.975 *******
2026-04-08 00:54:37.369176 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:54:37.369182 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:54:37.369187 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:54:37.369193 | orchestrator |
2026-04-08 00:54:37.369199 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-08 00:54:37.369205 | orchestrator | Wednesday 08 April 2026 00:47:49 +0000 (0:00:01.331) 0:03:44.307 *******
2026-04-08 00:54:37.369211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:54:37.369217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:54:37.369223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:54:37.369229 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.369235 | orchestrator |
2026-04-08 00:54:37.369241 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-08 00:54:37.369247 | orchestrator | Wednesday 08 April 2026 00:47:50 +0000 (0:00:00.749) 0:03:45.056 *******
2026-04-08 00:54:37.369266 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.369273 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.369279 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.369285 | orchestrator |
2026-04-08 00:54:37.369292 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-08 00:54:37.369298 | orchestrator | Wednesday 08 April 2026 00:47:50 +0000 (0:00:00.313) 0:03:45.370 *******
2026-04-08 00:54:37.369304 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.369311 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.369317 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.369323 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.369330 | orchestrator |
2026-04-08 00:54:37.369334 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-08 00:54:37.369338 | orchestrator | Wednesday 08 April 2026 00:47:51 +0000 (0:00:00.900) 0:03:46.270 *******
2026-04-08 00:54:37.369343 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.369354 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.369360 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.369365 | orchestrator |
2026-04-08 00:54:37.369371 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-08 00:54:37.369376 | orchestrator | Wednesday 08 April 2026 00:47:51 +0000 (0:00:00.278) 0:03:46.549 *******
2026-04-08 00:54:37.369381 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:54:37.369387 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:54:37.369392 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:54:37.369397 | orchestrator |
2026-04-08 00:54:37.369403 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-08 00:54:37.369411 | orchestrator | Wednesday 08 April 2026 00:47:53 +0000 (0:00:01.215) 0:03:47.765 *******
2026-04-08 00:54:37.369417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:54:37.369423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:54:37.369429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:54:37.369435 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.369441 | orchestrator |
2026-04-08 00:54:37.369447 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-08 00:54:37.369453 | orchestrator | Wednesday 08 April 2026 00:47:53 +0000 (0:00:00.588) 0:03:48.354 *******
2026-04-08 00:54:37.369459 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.369464 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.369470 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.369476 | orchestrator |
2026-04-08 00:54:37.369482 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-08 00:54:37.369487 | orchestrator | Wednesday 08 April 2026 00:47:54 +0000 (0:00:00.388) 0:03:48.742 *******
2026-04-08 00:54:37.369493 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.369499 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.369505 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.369510 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.369516 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.369521 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.369527 | orchestrator | 2026-04-08 00:54:37.369539 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-08 00:54:37.369546 | orchestrator | Wednesday 08 April 2026 00:47:54 +0000 (0:00:00.561) 0:03:49.304 ******* 2026-04-08 00:54:37.369552 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.369558 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.369564 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.369574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:54:37.369580 | orchestrator | 2026-04-08 00:54:37.369586 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-08 00:54:37.369592 | orchestrator | Wednesday 08 April 2026 00:47:55 +0000 (0:00:01.184) 0:03:50.488 ******* 2026-04-08 00:54:37.369597 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.369604 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.369609 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.369615 | orchestrator | 2026-04-08 00:54:37.369620 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-08 00:54:37.369626 | orchestrator | Wednesday 08 April 2026 00:47:56 +0000 (0:00:00.338) 0:03:50.827 ******* 2026-04-08 00:54:37.369631 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.369637 | orchestrator | changed: [testbed-node-1] 2026-04-08 
00:54:37.369643 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.369649 | orchestrator | 2026-04-08 00:54:37.369654 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-08 00:54:37.369660 | orchestrator | Wednesday 08 April 2026 00:47:57 +0000 (0:00:01.486) 0:03:52.314 ******* 2026-04-08 00:54:37.369666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:54:37.369678 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:54:37.369684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:54:37.369689 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.369695 | orchestrator | 2026-04-08 00:54:37.369700 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-08 00:54:37.369705 | orchestrator | Wednesday 08 April 2026 00:47:58 +0000 (0:00:00.638) 0:03:52.953 ******* 2026-04-08 00:54:37.369711 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.369717 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.369723 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.369729 | orchestrator | 2026-04-08 00:54:37.369735 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-08 00:54:37.369741 | orchestrator | 2026-04-08 00:54:37.369746 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:54:37.369751 | orchestrator | Wednesday 08 April 2026 00:47:58 +0000 (0:00:00.567) 0:03:53.521 ******* 2026-04-08 00:54:37.369758 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:54:37.369764 | orchestrator | 2026-04-08 00:54:37.369770 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 
00:54:37.369775 | orchestrator | Wednesday 08 April 2026 00:47:59 +0000 (0:00:00.814) 0:03:54.335 ******* 2026-04-08 00:54:37.369782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:54:37.369788 | orchestrator | 2026-04-08 00:54:37.369793 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-08 00:54:37.369799 | orchestrator | Wednesday 08 April 2026 00:48:00 +0000 (0:00:00.535) 0:03:54.871 ******* 2026-04-08 00:54:37.369805 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.369811 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.369818 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.369824 | orchestrator | 2026-04-08 00:54:37.369830 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:54:37.369837 | orchestrator | Wednesday 08 April 2026 00:48:00 +0000 (0:00:00.668) 0:03:55.540 ******* 2026-04-08 00:54:37.369843 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.369849 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.369855 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.369861 | orchestrator | 2026-04-08 00:54:37.369867 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:54:37.369873 | orchestrator | Wednesday 08 April 2026 00:48:01 +0000 (0:00:00.669) 0:03:56.209 ******* 2026-04-08 00:54:37.369879 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.369885 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.369892 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.369898 | orchestrator | 2026-04-08 00:54:37.369905 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-08 00:54:37.369909 | orchestrator | Wednesday 08 April 2026 00:48:01 
+0000 (0:00:00.335) 0:03:56.545 ******* 2026-04-08 00:54:37.369913 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.369916 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.369920 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.369924 | orchestrator | 2026-04-08 00:54:37.369928 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 00:54:37.369931 | orchestrator | Wednesday 08 April 2026 00:48:02 +0000 (0:00:00.331) 0:03:56.877 ******* 2026-04-08 00:54:37.369935 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.369939 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.369942 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.369946 | orchestrator | 2026-04-08 00:54:37.369950 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:54:37.369954 | orchestrator | Wednesday 08 April 2026 00:48:03 +0000 (0:00:00.815) 0:03:57.692 ******* 2026-04-08 00:54:37.369966 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.369970 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.369973 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.369977 | orchestrator | 2026-04-08 00:54:37.369981 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:54:37.369985 | orchestrator | Wednesday 08 April 2026 00:48:03 +0000 (0:00:00.649) 0:03:58.341 ******* 2026-04-08 00:54:37.369989 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.369992 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.369996 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.370000 | orchestrator | 2026-04-08 00:54:37.370009 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-08 00:54:37.370042 | orchestrator | Wednesday 08 April 2026 00:48:04 +0000 (0:00:00.326) 
0:03:58.668 ******* 2026-04-08 00:54:37.370046 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370053 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370057 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370061 | orchestrator | 2026-04-08 00:54:37.370064 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:54:37.370068 | orchestrator | Wednesday 08 April 2026 00:48:04 +0000 (0:00:00.786) 0:03:59.454 ******* 2026-04-08 00:54:37.370072 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370076 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370079 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370083 | orchestrator | 2026-04-08 00:54:37.370087 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-08 00:54:37.370091 | orchestrator | Wednesday 08 April 2026 00:48:05 +0000 (0:00:00.887) 0:04:00.342 ******* 2026-04-08 00:54:37.370094 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.370098 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.370102 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.370105 | orchestrator | 2026-04-08 00:54:37.370109 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-08 00:54:37.370113 | orchestrator | Wednesday 08 April 2026 00:48:06 +0000 (0:00:00.658) 0:04:01.001 ******* 2026-04-08 00:54:37.370117 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370120 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370124 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370128 | orchestrator | 2026-04-08 00:54:37.370131 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-08 00:54:37.370135 | orchestrator | Wednesday 08 April 2026 00:48:06 +0000 (0:00:00.415) 0:04:01.417 ******* 2026-04-08 00:54:37.370139 | 
orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.370142 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.370146 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.370150 | orchestrator | 2026-04-08 00:54:37.370153 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:54:37.370157 | orchestrator | Wednesday 08 April 2026 00:48:07 +0000 (0:00:00.338) 0:04:01.756 ******* 2026-04-08 00:54:37.370161 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.370164 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.370168 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.370172 | orchestrator | 2026-04-08 00:54:37.370175 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:54:37.370179 | orchestrator | Wednesday 08 April 2026 00:48:07 +0000 (0:00:00.348) 0:04:02.104 ******* 2026-04-08 00:54:37.370183 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.370186 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.370190 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.370194 | orchestrator | 2026-04-08 00:54:37.370197 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:54:37.370201 | orchestrator | Wednesday 08 April 2026 00:48:07 +0000 (0:00:00.302) 0:04:02.406 ******* 2026-04-08 00:54:37.370205 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.370211 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.370215 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.370219 | orchestrator | 2026-04-08 00:54:37.370222 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:54:37.370226 | orchestrator | Wednesday 08 April 2026 00:48:08 +0000 (0:00:00.496) 0:04:02.903 ******* 2026-04-08 00:54:37.370230 | 
orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.370233 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:54:37.370237 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:54:37.370241 | orchestrator | 2026-04-08 00:54:37.370244 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:54:37.370248 | orchestrator | Wednesday 08 April 2026 00:48:08 +0000 (0:00:00.298) 0:04:03.202 ******* 2026-04-08 00:54:37.370268 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370274 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370279 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370283 | orchestrator | 2026-04-08 00:54:37.370287 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:54:37.370291 | orchestrator | Wednesday 08 April 2026 00:48:08 +0000 (0:00:00.355) 0:04:03.557 ******* 2026-04-08 00:54:37.370295 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370298 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370302 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370306 | orchestrator | 2026-04-08 00:54:37.370310 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:54:37.370313 | orchestrator | Wednesday 08 April 2026 00:48:09 +0000 (0:00:00.280) 0:04:03.838 ******* 2026-04-08 00:54:37.370317 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370321 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370325 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370328 | orchestrator | 2026-04-08 00:54:37.370332 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-08 00:54:37.370336 | orchestrator | Wednesday 08 April 2026 00:48:10 +0000 (0:00:00.826) 0:04:04.664 ******* 2026-04-08 00:54:37.370340 | orchestrator | ok: [testbed-node-0] 2026-04-08 
00:54:37.370343 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370347 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370351 | orchestrator | 2026-04-08 00:54:37.370355 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-08 00:54:37.370359 | orchestrator | Wednesday 08 April 2026 00:48:10 +0000 (0:00:00.364) 0:04:05.029 ******* 2026-04-08 00:54:37.370363 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:54:37.370367 | orchestrator | 2026-04-08 00:54:37.370371 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-08 00:54:37.370374 | orchestrator | Wednesday 08 April 2026 00:48:11 +0000 (0:00:00.745) 0:04:05.774 ******* 2026-04-08 00:54:37.370378 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.370382 | orchestrator | 2026-04-08 00:54:37.370386 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-08 00:54:37.370400 | orchestrator | Wednesday 08 April 2026 00:48:11 +0000 (0:00:00.179) 0:04:05.954 ******* 2026-04-08 00:54:37.370404 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:54:37.370408 | orchestrator | 2026-04-08 00:54:37.370412 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-08 00:54:37.370419 | orchestrator | Wednesday 08 April 2026 00:48:12 +0000 (0:00:01.092) 0:04:07.046 ******* 2026-04-08 00:54:37.370423 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370427 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370431 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370435 | orchestrator | 2026-04-08 00:54:37.370438 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-08 00:54:37.370442 | orchestrator | Wednesday 08 April 
2026 00:48:12 +0000 (0:00:00.515) 0:04:07.562 ******* 2026-04-08 00:54:37.370446 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370453 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370456 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370460 | orchestrator | 2026-04-08 00:54:37.370464 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-08 00:54:37.370468 | orchestrator | Wednesday 08 April 2026 00:48:13 +0000 (0:00:00.390) 0:04:07.953 ******* 2026-04-08 00:54:37.370472 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.370475 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.370479 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.370483 | orchestrator | 2026-04-08 00:54:37.370487 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-08 00:54:37.370491 | orchestrator | Wednesday 08 April 2026 00:48:14 +0000 (0:00:01.392) 0:04:09.346 ******* 2026-04-08 00:54:37.370495 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.370502 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.370508 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.370515 | orchestrator | 2026-04-08 00:54:37.370521 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-08 00:54:37.370527 | orchestrator | Wednesday 08 April 2026 00:48:15 +0000 (0:00:01.103) 0:04:10.449 ******* 2026-04-08 00:54:37.370534 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.370539 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.370545 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.370550 | orchestrator | 2026-04-08 00:54:37.370556 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-08 00:54:37.370563 | orchestrator | Wednesday 08 April 2026 00:48:16 +0000 
(0:00:00.717) 0:04:11.167 ******* 2026-04-08 00:54:37.370568 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370574 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370579 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370585 | orchestrator | 2026-04-08 00:54:37.370590 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-08 00:54:37.370597 | orchestrator | Wednesday 08 April 2026 00:48:17 +0000 (0:00:00.695) 0:04:11.863 ******* 2026-04-08 00:54:37.370602 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.370607 | orchestrator | 2026-04-08 00:54:37.370613 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-08 00:54:37.370619 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:01.146) 0:04:13.009 ******* 2026-04-08 00:54:37.370625 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370630 | orchestrator | 2026-04-08 00:54:37.370636 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-08 00:54:37.370642 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:00.608) 0:04:13.618 ******* 2026-04-08 00:54:37.370648 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-08 00:54:37.370654 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.370659 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.370665 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:54:37.370671 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-08 00:54:37.370678 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:54:37.370685 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:37.370691 | 
orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-08 00:54:37.370697 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:37.370703 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-08 00:54:37.370709 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-08 00:54:37.370715 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-08 00:54:37.370722 | orchestrator | 2026-04-08 00:54:37.370728 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-08 00:54:37.370735 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:03.629) 0:04:17.248 ******* 2026-04-08 00:54:37.370748 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.370755 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.370762 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.370768 | orchestrator | 2026-04-08 00:54:37.370775 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-08 00:54:37.370781 | orchestrator | Wednesday 08 April 2026 00:48:24 +0000 (0:00:01.511) 0:04:18.759 ******* 2026-04-08 00:54:37.370788 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370792 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370796 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370800 | orchestrator | 2026-04-08 00:54:37.370804 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-08 00:54:37.370808 | orchestrator | Wednesday 08 April 2026 00:48:24 +0000 (0:00:00.347) 0:04:19.107 ******* 2026-04-08 00:54:37.370812 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.370816 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.370820 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.370824 | orchestrator | 2026-04-08 00:54:37.370828 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
**************************************
2026-04-08 00:54:37.370832 | orchestrator | Wednesday 08 April 2026 00:48:25 +0000 (0:00:00.594) 0:04:19.702 *******
2026-04-08 00:54:37.370836 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.370839 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.370843 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.370847 | orchestrator |
2026-04-08 00:54:37.370857 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-08 00:54:37.370861 | orchestrator | Wednesday 08 April 2026 00:48:27 +0000 (0:00:02.374) 0:04:22.076 *******
2026-04-08 00:54:37.370865 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.370869 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.370876 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.370880 | orchestrator |
2026-04-08 00:54:37.370883 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-08 00:54:37.370888 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:01.464) 0:04:23.541 *******
2026-04-08 00:54:37.370893 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.370899 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.370905 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.370912 | orchestrator |
2026-04-08 00:54:37.370916 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-08 00:54:37.370920 | orchestrator | Wednesday 08 April 2026 00:48:29 +0000 (0:00:00.438) 0:04:23.979 *******
2026-04-08 00:54:37.370924 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.370928 | orchestrator |
2026-04-08 00:54:37.370931 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-08 00:54:37.370935 | orchestrator | Wednesday 08 April 2026 00:48:30 +0000 (0:00:00.653) 0:04:24.633 *******
2026-04-08 00:54:37.370939 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.370943 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.370946 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.370950 | orchestrator |
2026-04-08 00:54:37.370954 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-08 00:54:37.370958 | orchestrator | Wednesday 08 April 2026 00:48:30 +0000 (0:00:00.638) 0:04:25.271 *******
2026-04-08 00:54:37.370961 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.370965 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.370969 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.370972 | orchestrator |
2026-04-08 00:54:37.370976 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-08 00:54:37.370980 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:00.456) 0:04:25.728 *******
2026-04-08 00:54:37.370984 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.370992 | orchestrator |
2026-04-08 00:54:37.370996 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-08 00:54:37.370999 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:00.504) 0:04:26.232 *******
2026-04-08 00:54:37.371004 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.371007 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.371011 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.371015 | orchestrator |
2026-04-08 00:54:37.371019 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-08 00:54:37.371022 | orchestrator | Wednesday 08 April 2026 00:48:34 +0000 (0:00:02.377) 0:04:28.610 *******
2026-04-08 00:54:37.371026 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.371030 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.371034 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.371037 | orchestrator |
2026-04-08 00:54:37.371041 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-08 00:54:37.371045 | orchestrator | Wednesday 08 April 2026 00:48:35 +0000 (0:00:01.159) 0:04:29.770 *******
2026-04-08 00:54:37.371049 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.371052 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.371056 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.371060 | orchestrator |
2026-04-08 00:54:37.371064 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-08 00:54:37.371071 | orchestrator | Wednesday 08 April 2026 00:48:37 +0000 (0:00:02.068) 0:04:31.839 *******
2026-04-08 00:54:37.371077 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.371083 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.371087 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.371091 | orchestrator |
2026-04-08 00:54:37.371095 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-08 00:54:37.371098 | orchestrator | Wednesday 08 April 2026 00:48:39 +0000 (0:00:02.373) 0:04:34.213 *******
2026-04-08 00:54:37.371102 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.371106 | orchestrator |
2026-04-08 00:54:37.371110 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...]
*************
2026-04-08 00:54:37.371113 | orchestrator | Wednesday 08 April 2026 00:48:40 +0000 (0:00:00.766) 0:04:34.980 *******
2026-04-08 00:54:37.371117 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-08 00:54:37.371121 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371125 | orchestrator |
2026-04-08 00:54:37.371128 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-08 00:54:37.371132 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:21.689) 0:04:56.669 *******
2026-04-08 00:54:37.371136 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371140 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371143 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371147 | orchestrator |
2026-04-08 00:54:37.371151 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-08 00:54:37.371155 | orchestrator | Wednesday 08 April 2026 00:49:08 +0000 (0:00:06.486) 0:05:03.156 *******
2026-04-08 00:54:37.371158 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371162 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371166 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371169 | orchestrator |
2026-04-08 00:54:37.371173 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-08 00:54:37.371177 | orchestrator | Wednesday 08 April 2026 00:49:08 +0000 (0:00:00.314) 0:05:03.471 *******
2026-04-08 00:54:37.371188 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__632037e3497c56b6f56e929016d438b8a86a670b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-08 00:54:37.371197 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__632037e3497c56b6f56e929016d438b8a86a670b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-08 00:54:37.371202 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__632037e3497c56b6f56e929016d438b8a86a670b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-08 00:54:37.371207 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__632037e3497c56b6f56e929016d438b8a86a670b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-08 00:54:37.371211 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__632037e3497c56b6f56e929016d438b8a86a670b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-08 00:54:37.371215 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__632037e3497c56b6f56e929016d438b8a86a670b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__632037e3497c56b6f56e929016d438b8a86a670b'}])
2026-04-08 00:54:37.371220 | orchestrator |
2026-04-08 00:54:37.371224 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-08 00:54:37.371228 | orchestrator | Wednesday 08 April 2026 00:49:19 +0000 (0:00:10.842) 0:05:14.313 *******
2026-04-08 00:54:37.371232 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371236 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371240 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371243 | orchestrator |
2026-04-08 00:54:37.371247 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-08 00:54:37.371265 | orchestrator | Wednesday 08 April 2026 00:49:20 +0000 (0:00:00.334) 0:05:14.648 *******
2026-04-08 00:54:37.371272 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.371277 | orchestrator |
2026-04-08 00:54:37.371283 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-08 00:54:37.371288 | orchestrator | Wednesday 08 April 2026 00:49:20 +0000 (0:00:00.777) 0:05:15.425 *******
2026-04-08 00:54:37.371294 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371300 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371307 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371313 | orchestrator |
2026-04-08 00:54:37.371320 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-08 00:54:37.371326 | orchestrator | Wednesday 08 April 2026 00:49:21 +0000 (0:00:00.317) 0:05:15.743 *******
2026-04-08 00:54:37.371332 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371336 |
orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371340 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371344 | orchestrator |
2026-04-08 00:54:37.371348 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-08 00:54:37.371355 | orchestrator | Wednesday 08 April 2026 00:49:21 +0000 (0:00:00.337) 0:05:16.080 *******
2026-04-08 00:54:37.371359 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:54:37.371363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-08 00:54:37.371366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-08 00:54:37.371370 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371374 | orchestrator |
2026-04-08 00:54:37.371377 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-08 00:54:37.371381 | orchestrator | Wednesday 08 April 2026 00:49:22 +0000 (0:00:00.852) 0:05:16.933 *******
2026-04-08 00:54:37.371385 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371388 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371392 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371396 | orchestrator |
2026-04-08 00:54:37.371404 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-08 00:54:37.371410 | orchestrator |
2026-04-08 00:54:37.371418 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-08 00:54:37.371428 | orchestrator | Wednesday 08 April 2026 00:49:23 +0000 (0:00:00.795) 0:05:17.729 *******
2026-04-08 00:54:37.371437 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.371443 | orchestrator |
2026-04-08 00:54:37.371449 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-08 00:54:37.371455 | orchestrator | Wednesday 08 April 2026 00:49:23 +0000 (0:00:00.524) 0:05:18.253 *******
2026-04-08 00:54:37.371461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.371466 | orchestrator |
2026-04-08 00:54:37.371471 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-08 00:54:37.371477 | orchestrator | Wednesday 08 April 2026 00:49:24 +0000 (0:00:00.749) 0:05:19.003 *******
2026-04-08 00:54:37.371484 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371490 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371496 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371502 | orchestrator |
2026-04-08 00:54:37.371508 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-08 00:54:37.371515 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.747) 0:05:19.750 *******
2026-04-08 00:54:37.371520 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371524 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371528 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371531 | orchestrator |
2026-04-08 00:54:37.371535 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-08 00:54:37.371539 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.331) 0:05:20.081 *******
2026-04-08 00:54:37.371545 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371552 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371558 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371564 | orchestrator |
2026-04-08 00:54:37.371570 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-08 00:54:37.371577 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.319) 0:05:20.400 *******
2026-04-08 00:54:37.371583 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371589 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371596 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371603 | orchestrator |
2026-04-08 00:54:37.371610 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-08 00:54:37.371616 | orchestrator | Wednesday 08 April 2026 00:49:26 +0000 (0:00:00.547) 0:05:20.948 *******
2026-04-08 00:54:37.371621 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371625 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371629 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371638 | orchestrator |
2026-04-08 00:54:37.371645 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-08 00:54:37.371651 | orchestrator | Wednesday 08 April 2026 00:49:27 +0000 (0:00:00.701) 0:05:21.650 *******
2026-04-08 00:54:37.371657 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371663 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371668 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371674 | orchestrator |
2026-04-08 00:54:37.371679 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-08 00:54:37.371684 | orchestrator | Wednesday 08 April 2026 00:49:27 +0000 (0:00:00.274) 0:05:21.924 *******
2026-04-08 00:54:37.371689 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371695 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371700 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371707 | orchestrator |
2026-04-08 00:54:37.371713 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-08 00:54:37.371719 |
orchestrator | Wednesday 08 April 2026 00:49:27 +0000 (0:00:00.253) 0:05:22.178 *******
2026-04-08 00:54:37.371725 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371731 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371737 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371743 | orchestrator |
2026-04-08 00:54:37.371749 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-08 00:54:37.371754 | orchestrator | Wednesday 08 April 2026 00:49:28 +0000 (0:00:01.106) 0:05:23.284 *******
2026-04-08 00:54:37.371760 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371765 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371771 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371776 | orchestrator |
2026-04-08 00:54:37.371782 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-08 00:54:37.371787 | orchestrator | Wednesday 08 April 2026 00:49:29 +0000 (0:00:00.826) 0:05:24.110 *******
2026-04-08 00:54:37.371793 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371799 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371805 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371811 | orchestrator |
2026-04-08 00:54:37.371817 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-08 00:54:37.371823 | orchestrator | Wednesday 08 April 2026 00:49:29 +0000 (0:00:00.255) 0:05:24.366 *******
2026-04-08 00:54:37.371829 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.371835 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.371842 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.371848 | orchestrator |
2026-04-08 00:54:37.371854 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-08 00:54:37.371861 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:00.296) 0:05:24.663 *******
2026-04-08 00:54:37.371868 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371874 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371880 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371886 | orchestrator |
2026-04-08 00:54:37.371892 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-08 00:54:37.371899 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:00.294) 0:05:24.957 *******
2026-04-08 00:54:37.371905 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371912 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371936 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371941 | orchestrator |
2026-04-08 00:54:37.371945 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-08 00:54:37.371949 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:00.532) 0:05:25.490 *******
2026-04-08 00:54:37.371956 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371960 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371964 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371968 | orchestrator |
2026-04-08 00:54:37.371972 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-08 00:54:37.371979 | orchestrator | Wednesday 08 April 2026 00:49:31 +0000 (0:00:00.352) 0:05:25.843 *******
2026-04-08 00:54:37.371983 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.371987 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.371990 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.371994 | orchestrator |
2026-04-08 00:54:37.371998 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-08 00:54:37.372001 | orchestrator | Wednesday 08 April 2026 00:49:31 +0000 (0:00:00.308) 0:05:26.151 *******
2026-04-08 00:54:37.372005 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.372009 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.372012 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.372016 | orchestrator |
2026-04-08 00:54:37.372020 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-08 00:54:37.372023 | orchestrator | Wednesday 08 April 2026 00:49:31 +0000 (0:00:00.319) 0:05:26.471 *******
2026-04-08 00:54:37.372027 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.372031 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.372034 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.372038 | orchestrator |
2026-04-08 00:54:37.372042 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-08 00:54:37.372046 | orchestrator | Wednesday 08 April 2026 00:49:32 +0000 (0:00:00.749) 0:05:27.221 *******
2026-04-08 00:54:37.372049 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.372053 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.372057 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.372060 | orchestrator |
2026-04-08 00:54:37.372064 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-08 00:54:37.372068 | orchestrator | Wednesday 08 April 2026 00:49:33 +0000 (0:00:00.430) 0:05:27.651 *******
2026-04-08 00:54:37.372071 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.372075 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.372079 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.372082 | orchestrator |
2026-04-08 00:54:37.372086 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-08 00:54:37.372090 | orchestrator | Wednesday 08 April 2026 00:49:33 +0000 (0:00:00.542) 0:05:28.194 ******* 2026-04-08
00:54:37.372094 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:54:37.372097 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-08 00:54:37.372101 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-08 00:54:37.372105 | orchestrator |
2026-04-08 00:54:37.372109 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-08 00:54:37.372112 | orchestrator | Wednesday 08 April 2026 00:49:34 +0000 (0:00:00.961) 0:05:29.155 *******
2026-04-08 00:54:37.372116 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.372120 | orchestrator |
2026-04-08 00:54:37.372124 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-08 00:54:37.372127 | orchestrator | Wednesday 08 April 2026 00:49:35 +0000 (0:00:00.793) 0:05:29.949 *******
2026-04-08 00:54:37.372131 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.372135 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.372138 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.372142 | orchestrator |
2026-04-08 00:54:37.372146 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-08 00:54:37.372150 | orchestrator | Wednesday 08 April 2026 00:49:36 +0000 (0:00:00.708) 0:05:30.657 *******
2026-04-08 00:54:37.372153 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.372157 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.372161 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.372164 | orchestrator |
2026-04-08 00:54:37.372168 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-08 00:54:37.372177 | orchestrator | Wednesday 08 April 2026 00:49:36 +0000 (0:00:00.287) 0:05:30.944 *******
2026-04-08 00:54:37.372181 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-08 00:54:37.372185 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-08 00:54:37.372189 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-08 00:54:37.372192 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-08 00:54:37.372196 | orchestrator |
2026-04-08 00:54:37.372200 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-08 00:54:37.372204 | orchestrator | Wednesday 08 April 2026 00:49:45 +0000 (0:00:09.021) 0:05:39.966 *******
2026-04-08 00:54:37.372207 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.372211 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.372215 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.372218 | orchestrator |
2026-04-08 00:54:37.372222 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-08 00:54:37.372226 | orchestrator | Wednesday 08 April 2026 00:49:46 +0000 (0:00:00.638) 0:05:40.605 *******
2026-04-08 00:54:37.372229 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-08 00:54:37.372233 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-08 00:54:37.372237 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-08 00:54:37.372240 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-08 00:54:37.372244 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-08 00:54:37.372248 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-08 00:54:37.372265 | orchestrator |
2026-04-08 00:54:37.372273 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-08 00:54:37.372277 | orchestrator | Wednesday 08 April 2026 00:49:47 +0000 (0:00:01.808) 0:05:42.413 *******
2026-04-08 00:54:37.372281 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-08 00:54:37.372285 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-08 00:54:37.372292 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-08 00:54:37.372298 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-08 00:54:37.372304 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-08 00:54:37.372309 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-08 00:54:37.372315 | orchestrator |
2026-04-08 00:54:37.372321 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-08 00:54:37.372328 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:01.228) 0:05:43.642 *******
2026-04-08 00:54:37.372334 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.372341 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.372348 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.372352 | orchestrator |
2026-04-08 00:54:37.372356 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-08 00:54:37.372359 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.667) 0:05:44.310 *******
2026-04-08 00:54:37.372363 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.372367 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.372370 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.372374 | orchestrator |
2026-04-08 00:54:37.372378 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-08 00:54:37.372381 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.425) 0:05:44.735 *******
2026-04-08 00:54:37.372385 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.372389 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.372392 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.372396 | orchestrator |
2026-04-08 00:54:37.372400 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-08 00:54:37.372404 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.321) 0:05:45.057 *******
2026-04-08 00:54:37.372407 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.372416 | orchestrator |
2026-04-08 00:54:37.372419 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-08 00:54:37.372423 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.438) 0:05:45.495 *******
2026-04-08 00:54:37.372427 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.372430 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.372434 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.372438 | orchestrator |
2026-04-08 00:54:37.372441 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-08 00:54:37.372445 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.435) 0:05:45.930 *******
2026-04-08 00:54:37.372449 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.372452 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.372456 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.372460 | orchestrator |
2026-04-08 00:54:37.372464 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-08 00:54:37.372467 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.279) 0:05:46.209 *******
2026-04-08 00:54:37.372471 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:54:37.372475 | orchestrator |
2026-04-08 00:54:37.372478 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-08 00:54:37.372482 | orchestrator | Wednesday 08 April 2026 00:49:52 +0000 (0:00:00.513) 0:05:46.723 *******
2026-04-08 00:54:37.372486 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.372490 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.372493 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.372497 | orchestrator |
2026-04-08 00:54:37.372501 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-08 00:54:37.372504 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:01.553) 0:05:48.276 *******
2026-04-08 00:54:37.372508 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.372513 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.372519 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.372525 | orchestrator |
2026-04-08 00:54:37.372534 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-08 00:54:37.372543 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:01.163) 0:05:49.439 *******
2026-04-08 00:54:37.372549 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.372554 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.372559 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.372565 | orchestrator |
2026-04-08 00:54:37.372570 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-08 00:54:37.372576 | orchestrator | Wednesday 08 April 2026 00:49:56 +0000 (0:00:01.891) 0:05:51.331 *******
2026-04-08 00:54:37.372582 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.372587 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.372593 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.372599 | orchestrator |
2026-04-08 00:54:37.372604 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml]
**************************************
2026-04-08 00:54:37.372610 | orchestrator | Wednesday 08 April 2026 00:49:58 +0000 (0:00:02.084) 0:05:53.415 *******
2026-04-08 00:54:37.372617 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.372623 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.372629 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-08 00:54:37.372636 | orchestrator |
2026-04-08 00:54:37.372642 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-08 00:54:37.372648 | orchestrator | Wednesday 08 April 2026 00:49:59 +0000 (0:00:00.639) 0:05:54.055 *******
2026-04-08 00:54:37.372652 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-08 00:54:37.372664 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-08 00:54:37.372668 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:54:37.372672 | orchestrator |
2026-04-08 00:54:37.372675 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-08 00:54:37.372682 | orchestrator | Wednesday 08 April 2026 00:50:12 +0000 (0:00:13.037) 0:06:07.092 *******
2026-04-08 00:54:37.372686 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:54:37.372689 | orchestrator |
2026-04-08 00:54:37.372693 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-08 00:54:37.372697 | orchestrator | Wednesday 08 April 2026 00:50:13 +0000 (0:00:01.336) 0:06:08.429 *******
2026-04-08 00:54:37.372701 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.372704 | orchestrator |
2026-04-08 00:54:37.372708 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-08 00:54:37.372712 | orchestrator | Wednesday 08 April 2026 00:50:14 +0000 (0:00:00.352) 0:06:08.781 *******
2026-04-08 00:54:37.372716 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.372721 | orchestrator |
2026-04-08 00:54:37.372729 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-08 00:54:37.372739 | orchestrator | Wednesday 08 April 2026 00:50:14 +0000 (0:00:00.152) 0:06:08.934 *******
2026-04-08 00:54:37.372745 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-08 00:54:37.372750 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-08 00:54:37.372756 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-08 00:54:37.372762 | orchestrator |
2026-04-08 00:54:37.372767 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-08 00:54:37.372772 | orchestrator | Wednesday 08 April 2026 00:50:20 +0000 (0:00:06.136) 0:06:15.071 *******
2026-04-08 00:54:37.372778 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-08 00:54:37.372783 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-08 00:54:37.372789 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-08 00:54:37.372794 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-08 00:54:37.372799 | orchestrator |
2026-04-08 00:54:37.372805 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-08 00:54:37.372810 | orchestrator | Wednesday 08 April 2026 00:50:25 +0000 (0:00:04.819) 0:06:19.891 *******
2026-04-08 00:54:37.372816 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.372822 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.372828 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.372833 | orchestrator | 2026-04-08 00:54:37.372840 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-08 00:54:37.372845 | orchestrator | Wednesday 08 April 2026 00:50:25 +0000 (0:00:00.668) 0:06:20.559 ******* 2026-04-08 00:54:37.372851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:54:37.372856 | orchestrator | 2026-04-08 00:54:37.372862 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-08 00:54:37.372867 | orchestrator | Wednesday 08 April 2026 00:50:26 +0000 (0:00:00.512) 0:06:21.072 ******* 2026-04-08 00:54:37.372873 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.372880 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.372885 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.372891 | orchestrator | 2026-04-08 00:54:37.372898 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-08 00:54:37.372904 | orchestrator | Wednesday 08 April 2026 00:50:27 +0000 (0:00:00.549) 0:06:21.621 ******* 2026-04-08 00:54:37.372910 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:54:37.372916 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:54:37.372928 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:54:37.372934 | orchestrator | 2026-04-08 00:54:37.372940 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-08 00:54:37.372946 | orchestrator | Wednesday 08 April 2026 00:50:28 +0000 (0:00:01.286) 0:06:22.908 ******* 2026-04-08 00:54:37.372951 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:54:37.372954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:54:37.372958 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:54:37.372962 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:54:37.372966 | orchestrator | 2026-04-08 00:54:37.372969 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-08 00:54:37.372973 | orchestrator | Wednesday 08 April 2026 00:50:28 +0000 (0:00:00.640) 0:06:23.548 ******* 2026-04-08 00:54:37.372977 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:37.372980 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:37.372984 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:37.372988 | orchestrator | 2026-04-08 00:54:37.372992 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-08 00:54:37.372995 | orchestrator | 2026-04-08 00:54:37.372999 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:54:37.373003 | orchestrator | Wednesday 08 April 2026 00:50:29 +0000 (0:00:00.546) 0:06:24.095 ******* 2026-04-08 00:54:37.373007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.373011 | orchestrator | 2026-04-08 00:54:37.373014 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 00:54:37.373018 | orchestrator | Wednesday 08 April 2026 00:50:30 +0000 (0:00:00.753) 0:06:24.849 ******* 2026-04-08 00:54:37.373022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.373026 | orchestrator | 2026-04-08 00:54:37.373033 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-08 00:54:37.373037 | orchestrator | Wednesday 08 April 2026 00:50:30 +0000 (0:00:00.543) 0:06:25.393 ******* 2026-04-08 
00:54:37.373041 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373044 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373051 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373055 | orchestrator | 2026-04-08 00:54:37.373058 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:54:37.373062 | orchestrator | Wednesday 08 April 2026 00:50:31 +0000 (0:00:00.546) 0:06:25.940 ******* 2026-04-08 00:54:37.373066 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373070 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373073 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373077 | orchestrator | 2026-04-08 00:54:37.373081 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:54:37.373084 | orchestrator | Wednesday 08 April 2026 00:50:32 +0000 (0:00:00.711) 0:06:26.651 ******* 2026-04-08 00:54:37.373088 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373092 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373096 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373099 | orchestrator | 2026-04-08 00:54:37.373103 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-08 00:54:37.373107 | orchestrator | Wednesday 08 April 2026 00:50:32 +0000 (0:00:00.776) 0:06:27.428 ******* 2026-04-08 00:54:37.373111 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373114 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373118 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373122 | orchestrator | 2026-04-08 00:54:37.373126 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 00:54:37.373129 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.717) 0:06:28.146 ******* 2026-04-08 00:54:37.373136 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:54:37.373140 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373143 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373147 | orchestrator | 2026-04-08 00:54:37.373151 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:54:37.373155 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.461) 0:06:28.607 ******* 2026-04-08 00:54:37.373158 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373162 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373166 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373169 | orchestrator | 2026-04-08 00:54:37.373173 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:54:37.373177 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.265) 0:06:28.873 ******* 2026-04-08 00:54:37.373180 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373184 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373188 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373191 | orchestrator | 2026-04-08 00:54:37.373195 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-08 00:54:37.373199 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.255) 0:06:29.129 ******* 2026-04-08 00:54:37.373203 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373206 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373210 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373214 | orchestrator | 2026-04-08 00:54:37.373217 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:54:37.373221 | orchestrator | Wednesday 08 April 2026 00:50:35 +0000 (0:00:00.681) 0:06:29.811 ******* 2026-04-08 00:54:37.373225 | orchestrator | ok: [testbed-node-3] 2026-04-08 
00:54:37.373228 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373232 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373236 | orchestrator | 2026-04-08 00:54:37.373240 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-08 00:54:37.373243 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:00.884) 0:06:30.695 ******* 2026-04-08 00:54:37.373247 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373265 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373272 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373278 | orchestrator | 2026-04-08 00:54:37.373284 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-08 00:54:37.373289 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:00.285) 0:06:30.980 ******* 2026-04-08 00:54:37.373295 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373301 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373307 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373313 | orchestrator | 2026-04-08 00:54:37.373319 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-08 00:54:37.373325 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:00.278) 0:06:31.259 ******* 2026-04-08 00:54:37.373331 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373337 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373343 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373348 | orchestrator | 2026-04-08 00:54:37.373354 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:54:37.373360 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:00.315) 0:06:31.575 ******* 2026-04-08 00:54:37.373365 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373372 | orchestrator | ok: 
[testbed-node-4] 2026-04-08 00:54:37.373378 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373384 | orchestrator | 2026-04-08 00:54:37.373390 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:54:37.373396 | orchestrator | Wednesday 08 April 2026 00:50:37 +0000 (0:00:00.482) 0:06:32.058 ******* 2026-04-08 00:54:37.373400 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373404 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373411 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373415 | orchestrator | 2026-04-08 00:54:37.373419 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:54:37.373422 | orchestrator | Wednesday 08 April 2026 00:50:37 +0000 (0:00:00.319) 0:06:32.377 ******* 2026-04-08 00:54:37.373426 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373430 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373433 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373437 | orchestrator | 2026-04-08 00:54:37.373441 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:54:37.373448 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.253) 0:06:32.631 ******* 2026-04-08 00:54:37.373451 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373455 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373459 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373463 | orchestrator | 2026-04-08 00:54:37.373469 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:54:37.373472 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.268) 0:06:32.899 ******* 2026-04-08 00:54:37.373476 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373480 | orchestrator | skipping: [testbed-node-4] 2026-04-08 
00:54:37.373484 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373487 | orchestrator | 2026-04-08 00:54:37.373491 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:54:37.373495 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.426) 0:06:33.326 ******* 2026-04-08 00:54:37.373498 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373502 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373506 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373510 | orchestrator | 2026-04-08 00:54:37.373513 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:54:37.373517 | orchestrator | Wednesday 08 April 2026 00:50:39 +0000 (0:00:00.343) 0:06:33.669 ******* 2026-04-08 00:54:37.373521 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373524 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373528 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373532 | orchestrator | 2026-04-08 00:54:37.373535 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-08 00:54:37.373539 | orchestrator | Wednesday 08 April 2026 00:50:39 +0000 (0:00:00.567) 0:06:34.237 ******* 2026-04-08 00:54:37.373543 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373546 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373550 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373554 | orchestrator | 2026-04-08 00:54:37.373557 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-08 00:54:37.373561 | orchestrator | Wednesday 08 April 2026 00:50:40 +0000 (0:00:00.659) 0:06:34.897 ******* 2026-04-08 00:54:37.373565 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:54:37.373569 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:37.373572 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:37.373576 | orchestrator | 2026-04-08 00:54:37.373580 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-08 00:54:37.373583 | orchestrator | Wednesday 08 April 2026 00:50:40 +0000 (0:00:00.611) 0:06:35.508 ******* 2026-04-08 00:54:37.373587 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.373591 | orchestrator | 2026-04-08 00:54:37.373595 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-08 00:54:37.373598 | orchestrator | Wednesday 08 April 2026 00:50:41 +0000 (0:00:00.521) 0:06:36.029 ******* 2026-04-08 00:54:37.373602 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373606 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373612 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373616 | orchestrator | 2026-04-08 00:54:37.373620 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-08 00:54:37.373623 | orchestrator | Wednesday 08 April 2026 00:50:41 +0000 (0:00:00.299) 0:06:36.329 ******* 2026-04-08 00:54:37.373627 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373631 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373635 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373638 | orchestrator | 2026-04-08 00:54:37.373642 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-08 00:54:37.373646 | orchestrator | Wednesday 08 April 2026 00:50:42 +0000 (0:00:00.554) 0:06:36.883 ******* 2026-04-08 00:54:37.373649 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373653 | 
orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373657 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373660 | orchestrator | 2026-04-08 00:54:37.373664 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-08 00:54:37.373668 | orchestrator | Wednesday 08 April 2026 00:50:42 +0000 (0:00:00.650) 0:06:37.534 ******* 2026-04-08 00:54:37.373672 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.373675 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.373679 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.373683 | orchestrator | 2026-04-08 00:54:37.373686 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-08 00:54:37.373690 | orchestrator | Wednesday 08 April 2026 00:50:43 +0000 (0:00:00.357) 0:06:37.891 ******* 2026-04-08 00:54:37.373694 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-08 00:54:37.373698 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-08 00:54:37.373702 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-08 00:54:37.373705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-08 00:54:37.373709 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-08 00:54:37.373713 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-08 00:54:37.373716 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-08 00:54:37.373720 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-08 00:54:37.373724 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'vm.swappiness', 'value': 10}) 2026-04-08 00:54:37.373730 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-08 00:54:37.373734 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-08 00:54:37.373737 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-08 00:54:37.373743 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-08 00:54:37.373747 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-08 00:54:37.373751 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-08 00:54:37.373754 | orchestrator | 2026-04-08 00:54:37.373758 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-08 00:54:37.373762 | orchestrator | Wednesday 08 April 2026 00:50:47 +0000 (0:00:04.167) 0:06:42.059 ******* 2026-04-08 00:54:37.373766 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.373769 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.373773 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.373777 | orchestrator | 2026-04-08 00:54:37.373781 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-08 00:54:37.373787 | orchestrator | Wednesday 08 April 2026 00:50:48 +0000 (0:00:00.574) 0:06:42.633 ******* 2026-04-08 00:54:37.373790 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.373794 | orchestrator | 2026-04-08 00:54:37.373798 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-08 00:54:37.373802 | orchestrator | Wednesday 08 April 2026 00:50:48 +0000 (0:00:00.508) 0:06:43.142 
******* 2026-04-08 00:54:37.373805 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-08 00:54:37.373809 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-08 00:54:37.373813 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-08 00:54:37.373816 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-08 00:54:37.373820 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-08 00:54:37.373824 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-08 00:54:37.373828 | orchestrator | 2026-04-08 00:54:37.373831 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-08 00:54:37.373835 | orchestrator | Wednesday 08 April 2026 00:50:49 +0000 (0:00:01.060) 0:06:44.203 ******* 2026-04-08 00:54:37.373839 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.373843 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:54:37.373846 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:54:37.373850 | orchestrator | 2026-04-08 00:54:37.373854 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:54:37.373857 | orchestrator | Wednesday 08 April 2026 00:50:51 +0000 (0:00:02.000) 0:06:46.203 ******* 2026-04-08 00:54:37.373861 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:54:37.373865 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:54:37.373869 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.373872 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:54:37.373876 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-08 00:54:37.373880 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.373884 | 
orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:54:37.373887 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-08 00:54:37.373891 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.373895 | orchestrator | 2026-04-08 00:54:37.373898 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-08 00:54:37.373902 | orchestrator | Wednesday 08 April 2026 00:50:53 +0000 (0:00:01.585) 0:06:47.789 ******* 2026-04-08 00:54:37.373906 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-08 00:54:37.373910 | orchestrator | 2026-04-08 00:54:37.373913 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-08 00:54:37.373917 | orchestrator | Wednesday 08 April 2026 00:50:55 +0000 (0:00:02.122) 0:06:49.911 ******* 2026-04-08 00:54:37.373921 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.373926 | orchestrator | 2026-04-08 00:54:37.373932 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-08 00:54:37.373938 | orchestrator | Wednesday 08 April 2026 00:50:55 +0000 (0:00:00.513) 0:06:50.425 ******* 2026-04-08 00:54:37.373944 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d2a42094-2be0-50d9-ab62-bd2425088ba2', 'data_vg': 'ceph-d2a42094-2be0-50d9-ab62-bd2425088ba2'}) 2026-04-08 00:54:37.373951 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-31d7fbda-737c-5413-835b-7dea8c782162', 'data_vg': 'ceph-31d7fbda-737c-5413-835b-7dea8c782162'}) 2026-04-08 00:54:37.373957 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bf49c8a6-5f7f-52ec-8321-922f51127285', 'data_vg': 'ceph-bf49c8a6-5f7f-52ec-8321-922f51127285'}) 2026-04-08 00:54:37.373967 | orchestrator | changed: [testbed-node-5] => (item={'data': 
'osd-block-ed835e4d-3c58-59bb-af9d-6d23bfbc2494', 'data_vg': 'ceph-ed835e4d-3c58-59bb-af9d-6d23bfbc2494'}) 2026-04-08 00:54:37.373973 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6d74f3d8-bff6-5917-9df4-f8420d533035', 'data_vg': 'ceph-6d74f3d8-bff6-5917-9df4-f8420d533035'}) 2026-04-08 00:54:37.373983 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-42db71c5-e51d-540c-8fbe-0cd4e432c3d3', 'data_vg': 'ceph-42db71c5-e51d-540c-8fbe-0cd4e432c3d3'}) 2026-04-08 00:54:37.373990 | orchestrator | 2026-04-08 00:54:37.373996 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-08 00:54:37.374006 | orchestrator | Wednesday 08 April 2026 00:51:34 +0000 (0:00:38.634) 0:07:29.059 ******* 2026-04-08 00:54:37.374100 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.374107 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.374111 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.374114 | orchestrator | 2026-04-08 00:54:37.374118 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-08 00:54:37.374122 | orchestrator | Wednesday 08 April 2026 00:51:35 +0000 (0:00:00.589) 0:07:29.649 ******* 2026-04-08 00:54:37.374126 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.374129 | orchestrator | 2026-04-08 00:54:37.374133 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-08 00:54:37.374137 | orchestrator | Wednesday 08 April 2026 00:51:35 +0000 (0:00:00.522) 0:07:30.171 ******* 2026-04-08 00:54:37.374141 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.374144 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.374148 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.374152 | orchestrator | 2026-04-08 00:54:37.374156 | orchestrator 
| TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-08 00:54:37.374159 | orchestrator | Wednesday 08 April 2026 00:51:36 +0000 (0:00:00.688) 0:07:30.860 ******* 2026-04-08 00:54:37.374163 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.374167 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.374170 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.374174 | orchestrator | 2026-04-08 00:54:37.374178 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-08 00:54:37.374182 | orchestrator | Wednesday 08 April 2026 00:51:38 +0000 (0:00:01.769) 0:07:32.629 ******* 2026-04-08 00:54:37.374185 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.374189 | orchestrator | 2026-04-08 00:54:37.374193 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-08 00:54:37.374197 | orchestrator | Wednesday 08 April 2026 00:51:38 +0000 (0:00:00.491) 0:07:33.120 ******* 2026-04-08 00:54:37.374200 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.374204 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.374208 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.374212 | orchestrator | 2026-04-08 00:54:37.374215 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-08 00:54:37.374219 | orchestrator | Wednesday 08 April 2026 00:51:39 +0000 (0:00:01.246) 0:07:34.367 ******* 2026-04-08 00:54:37.374223 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.374226 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.374230 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.374234 | orchestrator | 2026-04-08 00:54:37.374237 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-08 
00:54:37.374241 | orchestrator | Wednesday 08 April 2026 00:51:41 +0000 (0:00:01.510) 0:07:35.877 ******* 2026-04-08 00:54:37.374245 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.374249 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.374267 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.374272 | orchestrator | 2026-04-08 00:54:37.374281 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-08 00:54:37.374285 | orchestrator | Wednesday 08 April 2026 00:51:43 +0000 (0:00:01.836) 0:07:37.713 ******* 2026-04-08 00:54:37.374289 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.374292 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.374296 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.374300 | orchestrator | 2026-04-08 00:54:37.374304 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-08 00:54:37.374309 | orchestrator | Wednesday 08 April 2026 00:51:43 +0000 (0:00:00.309) 0:07:38.023 ******* 2026-04-08 00:54:37.374315 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.374321 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.374327 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.374333 | orchestrator | 2026-04-08 00:54:37.374338 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-08 00:54:37.374344 | orchestrator | Wednesday 08 April 2026 00:51:43 +0000 (0:00:00.304) 0:07:38.328 ******* 2026-04-08 00:54:37.374349 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-08 00:54:37.374355 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-08 00:54:37.374361 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-04-08 00:54:37.374367 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-08 00:54:37.374373 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-08 00:54:37.374379 | 
orchestrator | ok: [testbed-node-5] => (item=3)
2026-04-08 00:54:37.374385 | orchestrator |
2026-04-08 00:54:37.374392 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-08 00:54:37.374398 | orchestrator | Wednesday 08 April 2026 00:51:45 +0000 (0:00:01.404) 0:07:39.732 *******
2026-04-08 00:54:37.374405 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-08 00:54:37.374411 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-08 00:54:37.374418 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-08 00:54:37.374423 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-04-08 00:54:37.374429 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-08 00:54:37.374436 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-04-08 00:54:37.374441 | orchestrator |
2026-04-08 00:54:37.374447 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-08 00:54:37.374453 | orchestrator | Wednesday 08 April 2026 00:51:47 +0000 (0:00:02.212) 0:07:41.945 *******
2026-04-08 00:54:37.374459 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-08 00:54:37.374465 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-08 00:54:37.374470 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-08 00:54:37.374476 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-04-08 00:54:37.374486 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-08 00:54:37.374491 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-04-08 00:54:37.374497 | orchestrator |
2026-04-08 00:54:37.374502 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-08 00:54:37.374513 | orchestrator | Wednesday 08 April 2026 00:51:51 +0000 (0:00:03.660) 0:07:45.605 *******
2026-04-08 00:54:37.374518 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374524 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.374529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:54:37.374535 | orchestrator |
2026-04-08 00:54:37.374541 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-08 00:54:37.374546 | orchestrator | Wednesday 08 April 2026 00:51:53 +0000 (0:00:02.395) 0:07:48.001 *******
2026-04-08 00:54:37.374552 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374558 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.374564 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-08 00:54:37.374570 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:54:37.374581 | orchestrator |
2026-04-08 00:54:37.374587 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-08 00:54:37.374593 | orchestrator | Wednesday 08 April 2026 00:52:06 +0000 (0:00:12.775) 0:08:00.777 *******
2026-04-08 00:54:37.374598 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374604 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.374609 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.374615 | orchestrator |
2026-04-08 00:54:37.374621 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-08 00:54:37.374626 | orchestrator | Wednesday 08 April 2026 00:52:06 +0000 (0:00:00.750) 0:08:01.528 *******
2026-04-08 00:54:37.374632 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374638 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.374643 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.374649 | orchestrator |
2026-04-08 00:54:37.374655 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-08 00:54:37.374660 | orchestrator | Wednesday 08 April 2026 00:52:07 +0000 (0:00:00.449) 0:08:01.978 *******
2026-04-08 00:54:37.374665 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.374671 | orchestrator |
2026-04-08 00:54:37.374676 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-08 00:54:37.374682 | orchestrator | Wednesday 08 April 2026 00:52:07 +0000 (0:00:00.483) 0:08:02.461 *******
2026-04-08 00:54:37.374687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:54:37.374693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:54:37.374699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:54:37.374704 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374710 | orchestrator |
2026-04-08 00:54:37.374715 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-08 00:54:37.374721 | orchestrator | Wednesday 08 April 2026 00:52:08 +0000 (0:00:00.348) 0:08:02.809 *******
2026-04-08 00:54:37.374726 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374732 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.374738 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.374744 | orchestrator |
2026-04-08 00:54:37.374750 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-08 00:54:37.374756 | orchestrator | Wednesday 08 April 2026 00:52:08 +0000 (0:00:00.411) 0:08:03.221 *******
2026-04-08 00:54:37.374761 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374767 | orchestrator |
2026-04-08 00:54:37.374774 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-08 00:54:37.374780 | orchestrator | Wednesday 08 April 2026 00:52:08 +0000 (0:00:00.201) 0:08:03.423 *******
2026-04-08 00:54:37.374785 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374792 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.374798 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.374804 | orchestrator |
2026-04-08 00:54:37.374809 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-08 00:54:37.374816 | orchestrator | Wednesday 08 April 2026 00:52:09 +0000 (0:00:00.319) 0:08:03.742 *******
2026-04-08 00:54:37.374822 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374828 | orchestrator |
2026-04-08 00:54:37.374834 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-08 00:54:37.374840 | orchestrator | Wednesday 08 April 2026 00:52:09 +0000 (0:00:00.247) 0:08:03.989 *******
2026-04-08 00:54:37.374846 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374852 | orchestrator |
2026-04-08 00:54:37.374858 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-08 00:54:37.374863 | orchestrator | Wednesday 08 April 2026 00:52:09 +0000 (0:00:00.212) 0:08:04.201 *******
2026-04-08 00:54:37.374869 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374874 | orchestrator |
2026-04-08 00:54:37.374886 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-08 00:54:37.374893 | orchestrator | Wednesday 08 April 2026 00:52:09 +0000 (0:00:00.116) 0:08:04.318 *******
2026-04-08 00:54:37.374899 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374904 | orchestrator |
2026-04-08 00:54:37.374910 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-08 00:54:37.374917 | orchestrator | Wednesday 08 April 2026 00:52:09 +0000 (0:00:00.226) 0:08:04.544 *******
2026-04-08 00:54:37.374923 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374928 | orchestrator |
2026-04-08 00:54:37.374934 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-08 00:54:37.374940 | orchestrator | Wednesday 08 April 2026 00:52:10 +0000 (0:00:00.204) 0:08:04.748 *******
2026-04-08 00:54:37.374946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:54:37.374952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:54:37.374966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:54:37.374973 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.374979 | orchestrator |
2026-04-08 00:54:37.374985 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-08 00:54:37.374995 | orchestrator | Wednesday 08 April 2026 00:52:10 +0000 (0:00:00.703) 0:08:05.451 *******
2026-04-08 00:54:37.375002 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375009 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375015 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375021 | orchestrator |
2026-04-08 00:54:37.375027 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-08 00:54:37.375033 | orchestrator | Wednesday 08 April 2026 00:52:11 +0000 (0:00:00.592) 0:08:06.044 *******
2026-04-08 00:54:37.375039 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375044 | orchestrator |
2026-04-08 00:54:37.375050 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-08 00:54:37.375056 | orchestrator | Wednesday 08 April 2026 00:52:11 +0000 (0:00:00.240) 0:08:06.285 *******
2026-04-08 00:54:37.375062 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375068 | orchestrator |
2026-04-08 00:54:37.375073 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-08 00:54:37.375079 | orchestrator |
2026-04-08 00:54:37.375085 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-08 00:54:37.375091 | orchestrator | Wednesday 08 April 2026 00:52:12 +0000 (0:00:00.645) 0:08:06.930 *******
2026-04-08 00:54:37.375097 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.375103 | orchestrator |
2026-04-08 00:54:37.375108 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-08 00:54:37.375114 | orchestrator | Wednesday 08 April 2026 00:52:13 +0000 (0:00:01.200) 0:08:08.131 *******
2026-04-08 00:54:37.375120 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.375125 | orchestrator |
2026-04-08 00:54:37.375131 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-08 00:54:37.375136 | orchestrator | Wednesday 08 April 2026 00:52:14 +0000 (0:00:01.341) 0:08:09.472 *******
2026-04-08 00:54:37.375142 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.375148 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375153 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.375159 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.375164 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375170 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375175 | orchestrator |
2026-04-08 00:54:37.375181 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-08 00:54:37.375194 | orchestrator | Wednesday 08 April 2026 00:52:15 +0000 (0:00:00.969) 0:08:10.442 *******
2026-04-08 00:54:37.375201 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375208 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375214 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375220 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375225 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375231 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375237 | orchestrator |
2026-04-08 00:54:37.375242 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-08 00:54:37.375248 | orchestrator | Wednesday 08 April 2026 00:52:16 +0000 (0:00:00.954) 0:08:11.396 *******
2026-04-08 00:54:37.375266 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375273 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375279 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375285 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375291 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375297 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375304 | orchestrator |
2026-04-08 00:54:37.375310 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-08 00:54:37.375315 | orchestrator | Wednesday 08 April 2026 00:52:17 +0000 (0:00:01.089) 0:08:12.486 *******
2026-04-08 00:54:37.375321 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375327 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375333 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375338 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375343 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375349 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375355 | orchestrator |
2026-04-08 00:54:37.375360 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-08 00:54:37.375365 | orchestrator | Wednesday 08 April 2026 00:52:18 +0000 (0:00:00.947) 0:08:13.433 *******
2026-04-08 00:54:37.375371 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375377 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375383 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.375389 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.375395 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.375401 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375407 | orchestrator |
2026-04-08 00:54:37.375412 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-08 00:54:37.375418 | orchestrator | Wednesday 08 April 2026 00:52:19 +0000 (0:00:00.886) 0:08:14.320 *******
2026-04-08 00:54:37.375423 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375429 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375435 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375441 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375447 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375453 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375459 | orchestrator |
2026-04-08 00:54:37.375464 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-08 00:54:37.375471 | orchestrator | Wednesday 08 April 2026 00:52:20 +0000 (0:00:00.591) 0:08:14.912 *******
2026-04-08 00:54:37.375478 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375484 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375490 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375505 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375512 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375518 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375524 | orchestrator |
2026-04-08 00:54:37.375530 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-08 00:54:37.375541 | orchestrator | Wednesday 08 April 2026 00:52:21 +0000 (0:00:00.816) 0:08:15.729 *******
2026-04-08 00:54:37.375547 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.375553 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.375564 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.375570 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375576 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375583 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375589 | orchestrator |
2026-04-08 00:54:37.375595 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-08 00:54:37.375601 | orchestrator | Wednesday 08 April 2026 00:52:22 +0000 (0:00:01.055) 0:08:16.784 *******
2026-04-08 00:54:37.375607 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.375613 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.375619 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.375624 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375630 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375636 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375642 | orchestrator |
2026-04-08 00:54:37.375648 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-08 00:54:37.375654 | orchestrator | Wednesday 08 April 2026 00:52:23 +0000 (0:00:01.410) 0:08:18.195 *******
2026-04-08 00:54:37.375661 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375667 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375674 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375680 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375686 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375692 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375699 | orchestrator |
2026-04-08 00:54:37.375705 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-08 00:54:37.375711 | orchestrator | Wednesday 08 April 2026 00:52:24 +0000 (0:00:00.593) 0:08:18.788 *******
2026-04-08 00:54:37.375717 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.375724 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.375730 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.375736 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375742 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375749 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375755 | orchestrator |
2026-04-08 00:54:37.375761 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-08 00:54:37.375767 | orchestrator | Wednesday 08 April 2026 00:52:24 +0000 (0:00:00.788) 0:08:19.577 *******
2026-04-08 00:54:37.375774 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375780 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375786 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375792 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375798 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375805 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375811 | orchestrator |
2026-04-08 00:54:37.375817 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-08 00:54:37.375823 | orchestrator | Wednesday 08 April 2026 00:52:25 +0000 (0:00:00.735) 0:08:20.312 *******
2026-04-08 00:54:37.375829 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375835 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375842 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375848 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375854 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375860 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375866 | orchestrator |
2026-04-08 00:54:37.375872 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-08 00:54:37.375878 | orchestrator | Wednesday 08 April 2026 00:52:26 +0000 (0:00:00.584) 0:08:20.896 *******
2026-04-08 00:54:37.375885 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375891 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375897 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375903 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.375909 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.375915 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.375921 | orchestrator |
2026-04-08 00:54:37.375927 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-08 00:54:37.375941 | orchestrator | Wednesday 08 April 2026 00:52:27 +0000 (0:00:00.858) 0:08:21.755 *******
2026-04-08 00:54:37.375947 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.375954 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.375961 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.375967 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.375973 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.375979 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.375986 | orchestrator |
2026-04-08 00:54:37.375990 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-08 00:54:37.375994 | orchestrator | Wednesday 08 April 2026 00:52:27 +0000 (0:00:00.535) 0:08:22.291 *******
2026-04-08 00:54:37.375997 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:54:37.376001 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:54:37.376005 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:54:37.376008 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376012 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376015 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376019 | orchestrator |
2026-04-08 00:54:37.376023 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-08 00:54:37.376027 | orchestrator | Wednesday 08 April 2026 00:52:28 +0000 (0:00:00.656) 0:08:22.947 *******
2026-04-08 00:54:37.376030 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.376034 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.376038 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.376041 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376045 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376049 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376052 | orchestrator |
2026-04-08 00:54:37.376056 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-08 00:54:37.376060 | orchestrator | Wednesday 08 April 2026 00:52:28 +0000 (0:00:00.493) 0:08:23.441 *******
2026-04-08 00:54:37.376063 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.376067 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.376071 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.376080 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376084 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376088 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376091 | orchestrator |
2026-04-08 00:54:37.376095 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-08 00:54:37.376102 | orchestrator | Wednesday 08 April 2026 00:52:29 +0000 (0:00:00.740) 0:08:24.181 *******
2026-04-08 00:54:37.376106 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.376110 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.376113 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.376117 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376121 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376124 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376128 | orchestrator |
2026-04-08 00:54:37.376132 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-08 00:54:37.376135 | orchestrator | Wednesday 08 April 2026 00:52:30 +0000 (0:00:01.260) 0:08:25.442 *******
2026-04-08 00:54:37.376139 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.376143 | orchestrator |
2026-04-08 00:54:37.376147 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-08 00:54:37.376161 | orchestrator | Wednesday 08 April 2026 00:52:34 +0000 (0:00:03.165) 0:08:28.608 *******
2026-04-08 00:54:37.376165 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.376172 | orchestrator |
2026-04-08 00:54:37.376178 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-08 00:54:37.376184 | orchestrator | Wednesday 08 April 2026 00:52:35 +0000 (0:00:01.652) 0:08:30.260 *******
2026-04-08 00:54:37.376190 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.376196 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.376207 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.376211 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:54:37.376215 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:54:37.376219 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:54:37.376222 | orchestrator |
2026-04-08 00:54:37.376226 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-08 00:54:37.376230 | orchestrator | Wednesday 08 April 2026 00:52:37 +0000 (0:00:01.981) 0:08:32.242 *******
2026-04-08 00:54:37.376234 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.376237 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.376241 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:54:37.376245 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.376248 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:54:37.376263 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:54:37.376267 | orchestrator |
2026-04-08 00:54:37.376271 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-08 00:54:37.376274 | orchestrator | Wednesday 08 April 2026 00:52:38 +0000 (0:00:01.069) 0:08:33.312 *******
2026-04-08 00:54:37.376279 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.376284 | orchestrator |
2026-04-08 00:54:37.376288 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-08 00:54:37.376291 | orchestrator | Wednesday 08 April 2026 00:52:39 +0000 (0:00:01.028) 0:08:34.340 *******
2026-04-08 00:54:37.376296 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.376299 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.376303 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.376307 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:54:37.376310 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:54:37.376314 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:54:37.376318 | orchestrator |
2026-04-08 00:54:37.376322 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-08 00:54:37.376325 | orchestrator | Wednesday 08 April 2026 00:52:41 +0000 (0:00:01.446) 0:08:35.787 *******
2026-04-08 00:54:37.376329 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.376333 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.376336 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:54:37.376340 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:54:37.376344 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.376348 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:54:37.376351 | orchestrator |
2026-04-08 00:54:37.376355 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-08 00:54:37.376359 | orchestrator | Wednesday 08 April 2026 00:52:44 +0000 (0:00:03.207) 0:08:38.994 *******
2026-04-08 00:54:37.376363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.376366 | orchestrator |
2026-04-08 00:54:37.376370 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-08 00:54:37.376376 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:01.057) 0:08:40.052 *******
2026-04-08 00:54:37.376382 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.376388 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.376394 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.376400 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376406 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376413 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376419 | orchestrator |
2026-04-08 00:54:37.376426 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-08 00:54:37.376432 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:00.511) 0:08:40.563 *******
2026-04-08 00:54:37.376439 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:54:37.376446 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:54:37.376457 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:54:37.376464 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:54:37.376471 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:54:37.376475 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:54:37.376479 | orchestrator |
2026-04-08 00:54:37.376483 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-08 00:54:37.376486 | orchestrator | Wednesday 08 April 2026 00:52:48 +0000 (0:00:02.215) 0:08:42.779 *******
2026-04-08 00:54:37.376490 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:54:37.376494 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:54:37.376498 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:54:37.376501 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376509 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376513 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376517 | orchestrator |
2026-04-08 00:54:37.376520 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-08 00:54:37.376524 | orchestrator |
2026-04-08 00:54:37.376528 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-08 00:54:37.376534 | orchestrator | Wednesday 08 April 2026 00:52:49 +0000 (0:00:00.911) 0:08:43.691 *******
2026-04-08 00:54:37.376539 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.376543 | orchestrator |
2026-04-08 00:54:37.376547 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-08 00:54:37.376550 | orchestrator | Wednesday 08 April 2026 00:52:49 +0000 (0:00:00.433) 0:08:44.124 *******
2026-04-08 00:54:37.376554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:37.376558 | orchestrator |
2026-04-08 00:54:37.376562 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-08 00:54:37.376565 | orchestrator | Wednesday 08 April 2026 00:52:50 +0000 (0:00:00.577) 0:08:44.701 *******
2026-04-08 00:54:37.376569 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376573 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376577 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376580 | orchestrator |
2026-04-08 00:54:37.376584 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-08 00:54:37.376588 | orchestrator | Wednesday 08 April 2026 00:52:50 +0000 (0:00:00.293) 0:08:44.994 *******
2026-04-08 00:54:37.376592 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376595 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376599 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376603 | orchestrator |
2026-04-08 00:54:37.376606 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-08 00:54:37.376610 | orchestrator | Wednesday 08 April 2026 00:52:51 +0000 (0:00:00.665) 0:08:45.660 *******
2026-04-08 00:54:37.376614 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376618 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376621 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376625 | orchestrator |
2026-04-08 00:54:37.376629 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-08 00:54:37.376633 | orchestrator | Wednesday 08 April 2026 00:52:51 +0000 (0:00:00.735) 0:08:46.395 *******
2026-04-08 00:54:37.376636 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376640 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376644 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376647 | orchestrator |
2026-04-08 00:54:37.376651 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-08 00:54:37.376655 | orchestrator | Wednesday 08 April 2026 00:52:52 +0000 (0:00:00.917) 0:08:47.313 *******
2026-04-08 00:54:37.376659 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376662 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376666 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376670 | orchestrator |
2026-04-08 00:54:37.376676 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-08 00:54:37.376680 | orchestrator | Wednesday 08 April 2026 00:52:52 +0000 (0:00:00.264) 0:08:47.577 *******
2026-04-08 00:54:37.376684 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376688 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376694 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376700 | orchestrator |
2026-04-08 00:54:37.376706 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-08 00:54:37.376712 | orchestrator | Wednesday 08 April 2026 00:52:53 +0000 (0:00:00.287) 0:08:47.865 *******
2026-04-08 00:54:37.376718 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376724 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376737 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376742 | orchestrator |
2026-04-08 00:54:37.376745 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-08 00:54:37.376749 | orchestrator | Wednesday 08 April 2026 00:52:53 +0000 (0:00:00.250) 0:08:48.115 *******
2026-04-08 00:54:37.376753 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376757 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376760 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376764 | orchestrator |
2026-04-08 00:54:37.376768 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-08 00:54:37.376772 | orchestrator | Wednesday 08 April 2026 00:52:54 +0000 (0:00:00.906) 0:08:49.022 *******
2026-04-08 00:54:37.376775 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376779 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376783 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376786 | orchestrator |
2026-04-08 00:54:37.376790 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-08 00:54:37.376794 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:00.730) 0:08:49.752 *******
2026-04-08 00:54:37.376798 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376801 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376805 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376809 | orchestrator |
2026-04-08 00:54:37.376813 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-08 00:54:37.376816 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:00.286) 0:08:50.038 *******
2026-04-08 00:54:37.376820 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376824 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376827 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376831 | orchestrator |
2026-04-08 00:54:37.376835 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-08 00:54:37.376838 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:00.280) 0:08:50.319 *******
2026-04-08 00:54:37.376842 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376846 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376850 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376853 | orchestrator |
2026-04-08 00:54:37.376857 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-08 00:54:37.376861 | orchestrator | Wednesday 08 April 2026 00:52:56 +0000 (0:00:00.432) 0:08:50.751 *******
2026-04-08 00:54:37.376867 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376871 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376875 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376879 | orchestrator |
2026-04-08 00:54:37.376883 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-08 00:54:37.376889 | orchestrator | Wednesday 08 April 2026 00:52:56 +0000 (0:00:00.272) 0:08:51.023 *******
2026-04-08 00:54:37.376892 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.376896 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.376900 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.376904 | orchestrator |
2026-04-08 00:54:37.376907 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-08 00:54:37.376918 | orchestrator | Wednesday 08 April 2026 00:52:56 +0000 (0:00:00.290) 0:08:51.314 *******
2026-04-08 00:54:37.376922 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376926 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376929 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376933 | orchestrator |
2026-04-08 00:54:37.376937 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-08 00:54:37.376941 | orchestrator | Wednesday 08 April 2026 00:52:56 +0000 (0:00:00.257) 0:08:51.571 *******
2026-04-08 00:54:37.376944 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376948 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376952 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376958 | orchestrator |
2026-04-08 00:54:37.376964 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-08 00:54:37.376970 | orchestrator | Wednesday 08 April 2026 00:52:57 +0000 (0:00:00.548) 0:08:52.120 *******
2026-04-08 00:54:37.376976 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:37.376982 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.376988 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.376994 | orchestrator |
2026-04-08 00:54:37.377000 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-08 00:54:37.377006 | orchestrator | Wednesday 08 April 2026 00:52:57 +0000 (0:00:00.284) 0:08:52.405 *******
2026-04-08 00:54:37.377013 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.377020 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.377026 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.377032 | orchestrator |
2026-04-08 00:54:37.377038 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-08 00:54:37.377043 | orchestrator | Wednesday 08 April 2026 00:52:58 +0000 (0:00:00.310) 0:08:52.716 *******
2026-04-08 00:54:37.377047 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:37.377051 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:37.377055 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:37.377058 | orchestrator |
2026-04-08 00:54:37.377062 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-08 00:54:37.377066 | orchestrator | Wednesday 08 April 2026 00:52:58 +0000 (0:00:00.658) 0:08:53.375 *******
2026-04-08 00:54:37.377069 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:37.377074 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:37.377080 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-08 00:54:37.377087 | orchestrator |
2026-04-08 00:54:37.377093 | orchestrator | TASK
[ceph-facts : Get current default crush rule details] ********************* 2026-04-08 00:54:37.377099 | orchestrator | Wednesday 08 April 2026 00:52:59 +0000 (0:00:00.350) 0:08:53.726 ******* 2026-04-08 00:54:37.377105 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-08 00:54:37.377111 | orchestrator | 2026-04-08 00:54:37.377118 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-08 00:54:37.377123 | orchestrator | Wednesday 08 April 2026 00:53:00 +0000 (0:00:01.696) 0:08:55.422 ******* 2026-04-08 00:54:37.377131 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-08 00:54:37.377138 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.377144 | orchestrator | 2026-04-08 00:54:37.377151 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-08 00:54:37.377157 | orchestrator | Wednesday 08 April 2026 00:53:01 +0000 (0:00:00.178) 0:08:55.600 ******* 2026-04-08 00:54:37.377165 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:54:37.377176 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:54:37.377188 | orchestrator | 2026-04-08 00:54:37.377194 | orchestrator | TASK [ceph-mds : Create ceph filesystem] 
*************************************** 2026-04-08 00:54:37.377201 | orchestrator | Wednesday 08 April 2026 00:53:07 +0000 (0:00:06.624) 0:09:02.225 ******* 2026-04-08 00:54:37.377207 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-08 00:54:37.377213 | orchestrator | 2026-04-08 00:54:37.377220 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-08 00:54:37.377226 | orchestrator | Wednesday 08 April 2026 00:53:10 +0000 (0:00:02.910) 0:09:05.136 ******* 2026-04-08 00:54:37.377233 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.377239 | orchestrator | 2026-04-08 00:54:37.377246 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-08 00:54:37.377270 | orchestrator | Wednesday 08 April 2026 00:53:11 +0000 (0:00:00.854) 0:09:05.990 ******* 2026-04-08 00:54:37.377275 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-08 00:54:37.377278 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-08 00:54:37.377285 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-08 00:54:37.377289 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-08 00:54:37.377293 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-08 00:54:37.377296 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-08 00:54:37.377300 | orchestrator | 2026-04-08 00:54:37.377304 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-08 00:54:37.377308 | orchestrator | Wednesday 08 April 2026 00:53:12 +0000 (0:00:01.067) 0:09:07.057 ******* 2026-04-08 00:54:37.377311 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.377315 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:54:37.377319 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:54:37.377323 | orchestrator | 2026-04-08 00:54:37.377327 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:54:37.377330 | orchestrator | Wednesday 08 April 2026 00:53:14 +0000 (0:00:01.757) 0:09:08.815 ******* 2026-04-08 00:54:37.377334 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:54:37.377338 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:54:37.377342 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-08 00:54:37.377345 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:54:37.377349 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377353 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.377357 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:54:37.377360 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-08 00:54:37.377364 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377368 | orchestrator | 2026-04-08 00:54:37.377372 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-08 00:54:37.377375 | orchestrator | Wednesday 08 April 2026 00:53:15 +0000 (0:00:01.177) 0:09:09.993 ******* 2026-04-08 00:54:37.377379 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377383 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.377387 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377390 | orchestrator | 2026-04-08 00:54:37.377394 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-08 00:54:37.377398 | orchestrator | Wednesday 08 April 2026 00:53:18 +0000 
(0:00:02.952) 0:09:12.945 ******* 2026-04-08 00:54:37.377407 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.377413 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.377420 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.377431 | orchestrator | 2026-04-08 00:54:37.377437 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-08 00:54:37.377443 | orchestrator | Wednesday 08 April 2026 00:53:18 +0000 (0:00:00.583) 0:09:13.529 ******* 2026-04-08 00:54:37.377448 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.377454 | orchestrator | 2026-04-08 00:54:37.377460 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-08 00:54:37.377466 | orchestrator | Wednesday 08 April 2026 00:53:19 +0000 (0:00:00.953) 0:09:14.482 ******* 2026-04-08 00:54:37.377471 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.377477 | orchestrator | 2026-04-08 00:54:37.377483 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-08 00:54:37.377488 | orchestrator | Wednesday 08 April 2026 00:53:20 +0000 (0:00:00.869) 0:09:15.352 ******* 2026-04-08 00:54:37.377494 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377500 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.377505 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377511 | orchestrator | 2026-04-08 00:54:37.377517 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-08 00:54:37.377523 | orchestrator | Wednesday 08 April 2026 00:53:22 +0000 (0:00:01.447) 0:09:16.800 ******* 2026-04-08 00:54:37.377529 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.377535 | 
orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377541 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377547 | orchestrator | 2026-04-08 00:54:37.377553 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-08 00:54:37.377559 | orchestrator | Wednesday 08 April 2026 00:53:23 +0000 (0:00:01.402) 0:09:18.202 ******* 2026-04-08 00:54:37.377566 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377572 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.377577 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377581 | orchestrator | 2026-04-08 00:54:37.377585 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-08 00:54:37.377588 | orchestrator | Wednesday 08 April 2026 00:53:25 +0000 (0:00:02.289) 0:09:20.492 ******* 2026-04-08 00:54:37.377592 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.377596 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377600 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377603 | orchestrator | 2026-04-08 00:54:37.377607 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-08 00:54:37.377611 | orchestrator | Wednesday 08 April 2026 00:53:28 +0000 (0:00:02.373) 0:09:22.866 ******* 2026-04-08 00:54:37.377614 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.377620 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.377626 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.377632 | orchestrator | 2026-04-08 00:54:37.377637 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-08 00:54:37.377643 | orchestrator | Wednesday 08 April 2026 00:53:29 +0000 (0:00:01.554) 0:09:24.420 ******* 2026-04-08 00:54:37.377654 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377660 | orchestrator | changed: 
[testbed-node-4] 2026-04-08 00:54:37.377666 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377673 | orchestrator | 2026-04-08 00:54:37.377679 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-08 00:54:37.377689 | orchestrator | Wednesday 08 April 2026 00:53:30 +0000 (0:00:00.733) 0:09:25.154 ******* 2026-04-08 00:54:37.377696 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.377702 | orchestrator | 2026-04-08 00:54:37.377713 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-08 00:54:37.377719 | orchestrator | Wednesday 08 April 2026 00:53:31 +0000 (0:00:00.563) 0:09:25.717 ******* 2026-04-08 00:54:37.377725 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.377731 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.377737 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.377744 | orchestrator | 2026-04-08 00:54:37.377750 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-08 00:54:37.377757 | orchestrator | Wednesday 08 April 2026 00:53:31 +0000 (0:00:00.548) 0:09:26.266 ******* 2026-04-08 00:54:37.377762 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.377765 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.377769 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.377773 | orchestrator | 2026-04-08 00:54:37.377776 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-08 00:54:37.377780 | orchestrator | Wednesday 08 April 2026 00:53:32 +0000 (0:00:01.158) 0:09:27.424 ******* 2026-04-08 00:54:37.377784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:37.377787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 
00:54:37.377791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:37.377795 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.377798 | orchestrator | 2026-04-08 00:54:37.377802 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-08 00:54:37.377806 | orchestrator | Wednesday 08 April 2026 00:53:33 +0000 (0:00:00.613) 0:09:28.038 ******* 2026-04-08 00:54:37.377810 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.377813 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.377817 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.377821 | orchestrator | 2026-04-08 00:54:37.377824 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-08 00:54:37.377828 | orchestrator | 2026-04-08 00:54:37.377832 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:54:37.377836 | orchestrator | Wednesday 08 April 2026 00:53:33 +0000 (0:00:00.545) 0:09:28.583 ******* 2026-04-08 00:54:37.377839 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.377843 | orchestrator | 2026-04-08 00:54:37.377847 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 00:54:37.377851 | orchestrator | Wednesday 08 April 2026 00:53:34 +0000 (0:00:00.765) 0:09:29.349 ******* 2026-04-08 00:54:37.377855 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.377858 | orchestrator | 2026-04-08 00:54:37.377862 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-08 00:54:37.377866 | orchestrator | Wednesday 08 April 2026 00:53:35 +0000 (0:00:00.551) 0:09:29.900 ******* 
2026-04-08 00:54:37.377870 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.377873 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.377877 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.377881 | orchestrator | 2026-04-08 00:54:37.377884 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:54:37.377888 | orchestrator | Wednesday 08 April 2026 00:53:36 +0000 (0:00:00.734) 0:09:30.635 ******* 2026-04-08 00:54:37.377892 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.377896 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.377899 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.377903 | orchestrator | 2026-04-08 00:54:37.377907 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:54:37.377910 | orchestrator | Wednesday 08 April 2026 00:53:36 +0000 (0:00:00.770) 0:09:31.406 ******* 2026-04-08 00:54:37.377914 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.377918 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.377924 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.377928 | orchestrator | 2026-04-08 00:54:37.377932 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-08 00:54:37.377936 | orchestrator | Wednesday 08 April 2026 00:53:37 +0000 (0:00:00.712) 0:09:32.118 ******* 2026-04-08 00:54:37.377939 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.377943 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.377947 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.377950 | orchestrator | 2026-04-08 00:54:37.377954 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 00:54:37.377958 | orchestrator | Wednesday 08 April 2026 00:53:38 +0000 (0:00:00.759) 0:09:32.878 ******* 2026-04-08 00:54:37.377961 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:54:37.377965 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.377969 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.377972 | orchestrator | 2026-04-08 00:54:37.377976 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:54:37.377980 | orchestrator | Wednesday 08 April 2026 00:53:38 +0000 (0:00:00.630) 0:09:33.508 ******* 2026-04-08 00:54:37.377984 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.377987 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.377991 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.377995 | orchestrator | 2026-04-08 00:54:37.377998 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:54:37.378002 | orchestrator | Wednesday 08 April 2026 00:53:39 +0000 (0:00:00.303) 0:09:33.812 ******* 2026-04-08 00:54:37.378006 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378009 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378042 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378046 | orchestrator | 2026-04-08 00:54:37.378050 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-08 00:54:37.378054 | orchestrator | Wednesday 08 April 2026 00:53:39 +0000 (0:00:00.309) 0:09:34.121 ******* 2026-04-08 00:54:37.378057 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.378064 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.378067 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.378071 | orchestrator | 2026-04-08 00:54:37.378075 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:54:37.378079 | orchestrator | Wednesday 08 April 2026 00:53:40 +0000 (0:00:00.712) 0:09:34.834 ******* 2026-04-08 00:54:37.378083 | orchestrator | ok: [testbed-node-3] 2026-04-08 
00:54:37.378088 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.378094 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.378101 | orchestrator | 2026-04-08 00:54:37.378108 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-08 00:54:37.378112 | orchestrator | Wednesday 08 April 2026 00:53:41 +0000 (0:00:01.154) 0:09:35.988 ******* 2026-04-08 00:54:37.378116 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378120 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378124 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378127 | orchestrator | 2026-04-08 00:54:37.378131 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-08 00:54:37.378135 | orchestrator | Wednesday 08 April 2026 00:53:41 +0000 (0:00:00.326) 0:09:36.315 ******* 2026-04-08 00:54:37.378138 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378142 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378146 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378149 | orchestrator | 2026-04-08 00:54:37.378153 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-08 00:54:37.378157 | orchestrator | Wednesday 08 April 2026 00:53:42 +0000 (0:00:00.297) 0:09:36.613 ******* 2026-04-08 00:54:37.378161 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.378164 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.378168 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.378172 | orchestrator | 2026-04-08 00:54:37.378176 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:54:37.378182 | orchestrator | Wednesday 08 April 2026 00:53:42 +0000 (0:00:00.319) 0:09:36.932 ******* 2026-04-08 00:54:37.378186 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.378189 | orchestrator | ok: 
[testbed-node-4] 2026-04-08 00:54:37.378195 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.378202 | orchestrator | 2026-04-08 00:54:37.378208 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:54:37.378212 | orchestrator | Wednesday 08 April 2026 00:53:42 +0000 (0:00:00.619) 0:09:37.551 ******* 2026-04-08 00:54:37.378216 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.378220 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.378223 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.378227 | orchestrator | 2026-04-08 00:54:37.378231 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:54:37.378234 | orchestrator | Wednesday 08 April 2026 00:53:43 +0000 (0:00:00.346) 0:09:37.897 ******* 2026-04-08 00:54:37.378238 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378242 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378246 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378264 | orchestrator | 2026-04-08 00:54:37.378268 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:54:37.378272 | orchestrator | Wednesday 08 April 2026 00:53:43 +0000 (0:00:00.299) 0:09:38.197 ******* 2026-04-08 00:54:37.378276 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378280 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378283 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378287 | orchestrator | 2026-04-08 00:54:37.378291 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:54:37.378294 | orchestrator | Wednesday 08 April 2026 00:53:43 +0000 (0:00:00.306) 0:09:38.503 ******* 2026-04-08 00:54:37.378298 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378302 | orchestrator | skipping: [testbed-node-4] 2026-04-08 
00:54:37.378305 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378309 | orchestrator | 2026-04-08 00:54:37.378313 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:54:37.378316 | orchestrator | Wednesday 08 April 2026 00:53:44 +0000 (0:00:00.596) 0:09:39.100 ******* 2026-04-08 00:54:37.378320 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.378324 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.378327 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.378331 | orchestrator | 2026-04-08 00:54:37.378335 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:54:37.378339 | orchestrator | Wednesday 08 April 2026 00:53:44 +0000 (0:00:00.345) 0:09:39.446 ******* 2026-04-08 00:54:37.378342 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.378346 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.378350 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.378353 | orchestrator | 2026-04-08 00:54:37.378357 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-08 00:54:37.378361 | orchestrator | Wednesday 08 April 2026 00:53:45 +0000 (0:00:00.516) 0:09:39.963 ******* 2026-04-08 00:54:37.378364 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.378368 | orchestrator | 2026-04-08 00:54:37.378372 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-08 00:54:37.378376 | orchestrator | Wednesday 08 April 2026 00:53:46 +0000 (0:00:00.773) 0:09:40.737 ******* 2026-04-08 00:54:37.378379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.378383 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:54:37.378387 | orchestrator | ok: 
[testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:54:37.378391 | orchestrator | 2026-04-08 00:54:37.378394 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:54:37.378401 | orchestrator | Wednesday 08 April 2026 00:53:47 +0000 (0:00:01.811) 0:09:42.548 ******* 2026-04-08 00:54:37.378405 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:54:37.378411 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:54:37.378415 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.378419 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:54:37.378422 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-08 00:54:37.378426 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.378433 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:54:37.378437 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-08 00:54:37.378440 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.378444 | orchestrator | 2026-04-08 00:54:37.378448 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-08 00:54:37.378451 | orchestrator | Wednesday 08 April 2026 00:53:49 +0000 (0:00:01.280) 0:09:43.828 ******* 2026-04-08 00:54:37.378455 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378459 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378463 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378466 | orchestrator | 2026-04-08 00:54:37.378470 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-08 00:54:37.378474 | orchestrator | Wednesday 08 April 2026 00:53:49 +0000 (0:00:00.554) 0:09:44.383 ******* 2026-04-08 00:54:37.378478 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-08 00:54:37.378481 | orchestrator | 2026-04-08 00:54:37.378485 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-08 00:54:37.378489 | orchestrator | Wednesday 08 April 2026 00:53:50 +0000 (0:00:00.592) 0:09:44.976 ******* 2026-04-08 00:54:37.378493 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.378497 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.378501 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.378505 | orchestrator | 2026-04-08 00:54:37.378508 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-08 00:54:37.378512 | orchestrator | Wednesday 08 April 2026 00:53:51 +0000 (0:00:00.841) 0:09:45.818 ******* 2026-04-08 00:54:37.378516 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.378520 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-08 00:54:37.378523 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.378527 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-08 00:54:37.378531 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.378535 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] 
if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-08 00:54:37.378538 | orchestrator | 2026-04-08 00:54:37.378542 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-08 00:54:37.378546 | orchestrator | Wednesday 08 April 2026 00:53:55 +0000 (0:00:04.070) 0:09:49.888 ******* 2026-04-08 00:54:37.378550 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.378553 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:54:37.378557 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.378564 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:54:37.378567 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:37.378571 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:54:37.378575 | orchestrator | 2026-04-08 00:54:37.378578 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:54:37.378582 | orchestrator | Wednesday 08 April 2026 00:53:57 +0000 (0:00:01.950) 0:09:51.838 ******* 2026-04-08 00:54:37.378586 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:54:37.378590 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.378593 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:54:37.378597 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.378601 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:54:37.378606 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.378612 | orchestrator | 2026-04-08 00:54:37.378618 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-08 00:54:37.378624 | orchestrator | Wednesday 08 April 2026 
00:53:58 +0000 (0:00:01.265) 0:09:53.104 ******* 2026-04-08 00:54:37.378630 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-08 00:54:37.378636 | orchestrator | 2026-04-08 00:54:37.378642 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-08 00:54:37.378648 | orchestrator | Wednesday 08 April 2026 00:53:58 +0000 (0:00:00.209) 0:09:53.313 ******* 2026-04-08 00:54:37.378655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378693 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378699 | orchestrator | 2026-04-08 00:54:37.378705 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-08 00:54:37.378712 | orchestrator | Wednesday 08 April 2026 00:53:59 +0000 (0:00:00.707) 0:09:54.021 ******* 2026-04-08 00:54:37.378717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-04-08 00:54:37.378724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:54:37.378736 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378740 | orchestrator | 2026-04-08 00:54:37.378746 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-08 00:54:37.378752 | orchestrator | Wednesday 08 April 2026 00:54:00 +0000 (0:00:00.666) 0:09:54.688 ******* 2026-04-08 00:54:37.378759 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:54:37.378770 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:54:37.378777 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:54:37.378783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:54:37.378790 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:54:37.378796 | orchestrator | 2026-04-08 00:54:37.378802 | orchestrator | TASK [ceph-rgw : 
Include_tasks openstack-keystone.yml] ************************* 2026-04-08 00:54:37.378809 | orchestrator | Wednesday 08 April 2026 00:54:23 +0000 (0:00:23.843) 0:10:18.532 ******* 2026-04-08 00:54:37.378815 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378821 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378828 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378832 | orchestrator | 2026-04-08 00:54:37.378836 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-08 00:54:37.378840 | orchestrator | Wednesday 08 April 2026 00:54:24 +0000 (0:00:00.575) 0:10:19.108 ******* 2026-04-08 00:54:37.378843 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378847 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.378851 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.378854 | orchestrator | 2026-04-08 00:54:37.378858 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-08 00:54:37.378862 | orchestrator | Wednesday 08 April 2026 00:54:24 +0000 (0:00:00.315) 0:10:19.423 ******* 2026-04-08 00:54:37.378866 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.378869 | orchestrator | 2026-04-08 00:54:37.378873 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-08 00:54:37.378877 | orchestrator | Wednesday 08 April 2026 00:54:25 +0000 (0:00:00.518) 0:10:19.942 ******* 2026-04-08 00:54:37.378880 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.378884 | orchestrator | 2026-04-08 00:54:37.378888 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-08 00:54:37.378891 | orchestrator | Wednesday 08 April 
2026 00:54:26 +0000 (0:00:00.731) 0:10:20.674 ******* 2026-04-08 00:54:37.378895 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.378899 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.378903 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.378906 | orchestrator | 2026-04-08 00:54:37.378910 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-08 00:54:37.378914 | orchestrator | Wednesday 08 April 2026 00:54:27 +0000 (0:00:01.252) 0:10:21.926 ******* 2026-04-08 00:54:37.378917 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.378921 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.378925 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.378929 | orchestrator | 2026-04-08 00:54:37.378936 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-08 00:54:37.378940 | orchestrator | Wednesday 08 April 2026 00:54:28 +0000 (0:00:01.182) 0:10:23.109 ******* 2026-04-08 00:54:37.378943 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:54:37.378949 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:54:37.378953 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:54:37.378957 | orchestrator | 2026-04-08 00:54:37.378960 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-08 00:54:37.378964 | orchestrator | Wednesday 08 April 2026 00:54:30 +0000 (0:00:02.375) 0:10:25.485 ******* 2026-04-08 00:54:37.378973 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.378977 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.378981 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-08 00:54:37.378985 | orchestrator | 2026-04-08 00:54:37.378988 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-08 00:54:37.378992 | orchestrator | Wednesday 08 April 2026 00:54:33 +0000 (0:00:02.678) 0:10:28.163 ******* 2026-04-08 00:54:37.378996 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.378999 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.379003 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:37.379007 | orchestrator | 2026-04-08 00:54:37.379010 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-08 00:54:37.379014 | orchestrator | Wednesday 08 April 2026 00:54:34 +0000 (0:00:00.666) 0:10:28.830 ******* 2026-04-08 00:54:37.379018 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:37.379021 | orchestrator | 2026-04-08 00:54:37.379025 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-08 00:54:37.379029 | orchestrator | Wednesday 08 April 2026 00:54:34 +0000 (0:00:00.523) 0:10:29.353 ******* 2026-04-08 00:54:37.379032 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.379036 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.379040 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.379044 | orchestrator | 2026-04-08 00:54:37.379047 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-08 00:54:37.379051 | orchestrator | Wednesday 08 April 2026 00:54:35 +0000 (0:00:00.312) 0:10:29.666 ******* 2026-04-08 00:54:37.379055 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.379058 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:37.379062 | orchestrator | skipping: [testbed-node-5] 2026-04-08 
00:54:37.379066 | orchestrator | 2026-04-08 00:54:37.379069 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-08 00:54:37.379073 | orchestrator | Wednesday 08 April 2026 00:54:35 +0000 (0:00:00.592) 0:10:30.259 ******* 2026-04-08 00:54:37.379077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:37.379081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:37.379084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:37.379088 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:37.379092 | orchestrator | 2026-04-08 00:54:37.379095 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-08 00:54:37.379099 | orchestrator | Wednesday 08 April 2026 00:54:36 +0000 (0:00:00.642) 0:10:30.901 ******* 2026-04-08 00:54:37.379103 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:37.379106 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:37.379110 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:37.379114 | orchestrator | 2026-04-08 00:54:37.379118 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:54:37.379121 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2026-04-08 00:54:37.379126 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-08 00:54:37.379130 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-08 00:54:37.379133 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2026-04-08 00:54:37.379140 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-08 00:54:37.379144 
| orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-08 00:54:37.379148 | orchestrator | 2026-04-08 00:54:37.379152 | orchestrator | 2026-04-08 00:54:37.379155 | orchestrator | 2026-04-08 00:54:37.379159 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:54:37.379163 | orchestrator | Wednesday 08 April 2026 00:54:36 +0000 (0:00:00.269) 0:10:31.171 ******* 2026-04-08 00:54:37.379166 | orchestrator | =============================================================================== 2026-04-08 00:54:37.379170 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 54.16s 2026-04-08 00:54:37.379174 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.63s 2026-04-08 00:54:37.379180 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 23.84s 2026-04-08 00:54:37.379184 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.69s 2026-04-08 00:54:37.379188 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.04s 2026-04-08 00:54:37.379194 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.78s 2026-04-08 00:54:37.379197 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.84s 2026-04-08 00:54:37.379201 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.02s 2026-04-08 00:54:37.379205 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.92s 2026-04-08 00:54:37.379209 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.62s 2026-04-08 00:54:37.379212 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.49s 2026-04-08 00:54:37.379216 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.14s 2026-04-08 00:54:37.379220 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.82s 2026-04-08 00:54:37.379223 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.17s 2026-04-08 00:54:37.379227 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.07s 2026-04-08 00:54:37.379231 | orchestrator | ceph-facts : Set_fact _container_exec_cmd ------------------------------- 3.91s 2026-04-08 00:54:37.379234 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.66s 2026-04-08 00:54:37.379238 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.63s 2026-04-08 00:54:37.379242 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.62s 2026-04-08 00:54:37.379245 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 
3.46s 2026-04-08 00:54:37.379258 | orchestrator | 2026-04-08 00:54:37 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:54:37.379265 | orchestrator | 2026-04-08 00:54:37 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:54:37.379271 | orchestrator | 2026-04-08 00:54:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:40.432791 | orchestrator | 2026-04-08 00:54:40 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state STARTED 2026-04-08 00:54:40.433283 | orchestrator | 2026-04-08 00:54:40 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED 2026-04-08 00:54:40.433305 | orchestrator | 2026-04-08 00:54:40 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:54:40.433310 | orchestrator | 2026-04-08 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:55:41.533759 | orchestrator | 2026-04-08 00:55:41 | INFO  | Task 861bc8ef-3b0e-42d8-94e0-258e7df726b8 is in state SUCCESS 2026-04-08 00:55:41.534569 | orchestrator | 2026-04-08 00:55:41.534598 | orchestrator | 2026-04-08 00:55:41.534604 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:55:41.534609 | orchestrator | 2026-04-08 00:55:41.534614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:55:41.534619 | orchestrator | Wednesday 08 April 2026 00:52:52 +0000 (0:00:00.286) 0:00:00.286 ******* 2026-04-08 00:55:41.534623 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:55:41.534628 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:55:41.534633 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:55:41.534638 | orchestrator | 2026-04-08 00:55:41.534643 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:55:41.534649 | orchestrator | Wednesday 08 April 2026 00:52:52 +0000 (0:00:00.258) 0:00:00.544 ******* 2026-04-08 00:55:41.534656 | orchestrator | ok:
[testbed-node-0] => (item=enable_opensearch_True) 2026-04-08 00:55:41.534666 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-08 00:55:41.534672 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-08 00:55:41.534679 | orchestrator | 2026-04-08 00:55:41.534686 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-08 00:55:41.534709 | orchestrator | 2026-04-08 00:55:41.534717 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-08 00:55:41.534724 | orchestrator | Wednesday 08 April 2026 00:52:52 +0000 (0:00:00.264) 0:00:00.809 ******* 2026-04-08 00:55:41.534778 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:55:41.534785 | orchestrator | 2026-04-08 00:55:41.534789 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-08 00:55:41.534793 | orchestrator | Wednesday 08 April 2026 00:52:53 +0000 (0:00:00.565) 0:00:01.375 ******* 2026-04-08 00:55:41.534797 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:55:41.534801 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:55:41.534805 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:55:41.534809 | orchestrator | 2026-04-08 00:55:41.534819 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-08 00:55:41.534823 | orchestrator | Wednesday 08 April 2026 00:52:54 +0000 (0:00:01.044) 0:00:02.420 ******* 2026-04-08 00:55:41.534829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.534835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.534906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.534915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.534927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.534932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.534937 | orchestrator | 2026-04-08 00:55:41.534940 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-08 00:55:41.534944 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:01.303) 0:00:03.723 ******* 2026-04-08 00:55:41.534948 | 
orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:55:41.534952 | orchestrator | 2026-04-08 00:55:41.534956 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-08 00:55:41.534960 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:00.467) 0:00:04.190 ******* 2026-04-08 00:55:41.534967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.534974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.534980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.534985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.534992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535014 | orchestrator | 2026-04-08 00:55:41.535021 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-08 00:55:41.535037 | orchestrator | Wednesday 08 April 2026 00:52:58 +0000 (0:00:02.884) 0:00:07.075 ******* 2026-04-08 00:55:41.535052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-08 00:55:41.535059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-08 00:55:41.535064 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:41.535069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-08 00:55:41.535081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-08 00:55:41.535085 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:41.535091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-08 00:55:41.535095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-08 00:55:41.535100 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:41.535103 | orchestrator | 2026-04-08 00:55:41.535107 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-08 00:55:41.535111 | orchestrator | Wednesday 08 April 2026 00:52:59 +0000 (0:00:00.701) 0:00:07.776 ******* 2026-04-08 00:55:41.535115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-08 00:55:41.535128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-08 00:55:41.535133 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:41.535138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-08 00:55:41.535143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-08 00:55:41.535147 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:41.535151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-08 00:55:41.535162 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-08 00:55:41.535166 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:41.535170 | orchestrator | 2026-04-08 00:55:41.535174 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-08 00:55:41.535177 | orchestrator | Wednesday 08 April 2026 00:53:00 +0000 (0:00:00.788) 0:00:08.565 ******* 2026-04-08 00:55:41.535183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.535187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.535191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.535203 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535218 | orchestrator | 2026-04-08 00:55:41.535222 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-08 00:55:41.535225 | orchestrator | Wednesday 08 April 2026 00:53:02 +0000 (0:00:02.607) 0:00:11.172 ******* 2026-04-08 00:55:41.535229 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:41.535233 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:55:41.535252 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:55:41.535259 | orchestrator | 2026-04-08 00:55:41.535263 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-08 00:55:41.535267 | orchestrator | Wednesday 08 April 2026 00:53:05 +0000 
(0:00:02.770) 0:00:13.943 ******* 2026-04-08 00:55:41.535271 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:41.535275 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:55:41.535278 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:55:41.535282 | orchestrator | 2026-04-08 00:55:41.535286 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-08 00:55:41.535290 | orchestrator | Wednesday 08 April 2026 00:53:07 +0000 (0:00:01.619) 0:00:15.562 ******* 2026-04-08 00:55:41.535294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.535300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.535305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-08 00:55:41.535309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-08 00:55:41.535345 | orchestrator | 2026-04-08 00:55:41.535351 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-08 00:55:41.535357 | orchestrator | Wednesday 08 April 2026 00:53:09 +0000 (0:00:02.428) 0:00:17.991 ******* 2026-04-08 00:55:41.535362 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:41.535368 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:41.535374 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:41.535380 | orchestrator | 2026-04-08 00:55:41.535386 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-08 00:55:41.535392 | orchestrator | Wednesday 08 April 2026 00:53:10 +0000 (0:00:00.486) 0:00:18.477 ******* 2026-04-08 00:55:41.535398 | orchestrator | 2026-04-08 00:55:41.535405 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-08 00:55:41.535412 | orchestrator | Wednesday 08 April 2026 00:53:10 +0000 (0:00:00.073) 0:00:18.551 ******* 2026-04-08 00:55:41.535416 | orchestrator | 2026-04-08 00:55:41.535419 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-08 00:55:41.535423 | orchestrator | Wednesday 08 April 2026 00:53:10 +0000 (0:00:00.071) 0:00:18.623 ******* 2026-04-08 00:55:41.535427 | orchestrator | 2026-04-08 00:55:41.535431 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-08 00:55:41.535436 | orchestrator | 
Wednesday 08 April 2026 00:53:10 +0000 (0:00:00.072) 0:00:18.696 ******* 2026-04-08 00:55:41.535440 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:41.535444 | orchestrator | 2026-04-08 00:55:41.535448 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-08 00:55:41.535457 | orchestrator | Wednesday 08 April 2026 00:53:10 +0000 (0:00:00.208) 0:00:18.904 ******* 2026-04-08 00:55:41.535463 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:41.535469 | orchestrator | 2026-04-08 00:55:41.535475 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-08 00:55:41.535481 | orchestrator | Wednesday 08 April 2026 00:53:10 +0000 (0:00:00.187) 0:00:19.092 ******* 2026-04-08 00:55:41.535488 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:41.535494 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:55:41.535500 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:55:41.535504 | orchestrator | 2026-04-08 00:55:41.535507 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-08 00:55:41.535511 | orchestrator | Wednesday 08 April 2026 00:54:14 +0000 (0:01:04.120) 0:01:23.212 ******* 2026-04-08 00:55:41.535515 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:41.535519 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:55:41.535523 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:55:41.535526 | orchestrator | 2026-04-08 00:55:41.535530 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-08 00:55:41.535534 | orchestrator | Wednesday 08 April 2026 00:55:28 +0000 (0:01:13.110) 0:02:36.323 ******* 2026-04-08 00:55:41.535538 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:55:41.535541 | orchestrator | 2026-04-08 
00:55:41.535545 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-08 00:55:41.535549 | orchestrator | Wednesday 08 April 2026 00:55:28 +0000 (0:00:00.664) 0:02:36.988 ******* 2026-04-08 00:55:41.535553 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:55:41.535557 | orchestrator | 2026-04-08 00:55:41.535560 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-08 00:55:41.535564 | orchestrator | Wednesday 08 April 2026 00:55:31 +0000 (0:00:02.311) 0:02:39.300 ******* 2026-04-08 00:55:41.535568 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:55:41.535572 | orchestrator | 2026-04-08 00:55:41.535575 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-08 00:55:41.535579 | orchestrator | Wednesday 08 April 2026 00:55:33 +0000 (0:00:02.087) 0:02:41.387 ******* 2026-04-08 00:55:41.535583 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:55:41.535586 | orchestrator | 2026-04-08 00:55:41.535590 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-08 00:55:41.535594 | orchestrator | Wednesday 08 April 2026 00:55:35 +0000 (0:00:02.178) 0:02:43.566 ******* 2026-04-08 00:55:41.535598 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:41.535601 | orchestrator | 2026-04-08 00:55:41.535605 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-08 00:55:41.535609 | orchestrator | Wednesday 08 April 2026 00:55:38 +0000 (0:00:02.686) 0:02:46.252 ******* 2026-04-08 00:55:41.535613 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:41.535616 | orchestrator | 2026-04-08 00:55:41.535620 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:55:41.535624 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-08 00:55:41.535629 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-08 00:55:41.535636 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-08 00:55:41.535640 | orchestrator | 2026-04-08 00:55:41.535644 | orchestrator | 2026-04-08 00:55:41.535648 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:55:41.535652 | orchestrator | Wednesday 08 April 2026 00:55:40 +0000 (0:00:02.611) 0:02:48.864 ******* 2026-04-08 00:55:41.535655 | orchestrator | =============================================================================== 2026-04-08 00:55:41.535662 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 73.11s 2026-04-08 00:55:41.535666 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.12s 2026-04-08 00:55:41.535670 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.88s 2026-04-08 00:55:41.535673 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.77s 2026-04-08 00:55:41.535677 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.69s 2026-04-08 00:55:41.535681 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.61s 2026-04-08 00:55:41.535685 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.61s 2026-04-08 00:55:41.535692 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.43s 2026-04-08 00:55:41.535698 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.31s 2026-04-08 00:55:41.535704 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.18s 2026-04-08 
00:55:41.535709 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.09s 2026-04-08 00:55:41.535715 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.62s 2026-04-08 00:55:41.535721 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.30s 2026-04-08 00:55:41.535727 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.04s 2026-04-08 00:55:41.535737 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.79s 2026-04-08 00:55:41.535743 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.70s 2026-04-08 00:55:41.535750 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s 2026-04-08 00:55:41.535757 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-04-08 00:55:41.535766 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-04-08 00:55:41.535772 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2026-04-08 00:55:41.535864 | orchestrator | 2026-04-08 00:55:41 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED 2026-04-08 00:55:41.538229 | orchestrator | 2026-04-08 00:55:41 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:55:41.538319 | orchestrator | 2026-04-08 00:55:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:55:44.582006 | orchestrator | 2026-04-08 00:55:44 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED 2026-04-08 00:55:44.582133 | orchestrator | 2026-04-08 00:55:44 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state STARTED 2026-04-08 00:55:44.582140 | orchestrator | 2026-04-08 00:55:44 | INFO  | Wait 1 second(s) until the 
next check 2026-04-08 00:55:47.627393 | orchestrator | 2026-04-08 00:55:47 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED 2026-04-08 00:55:47.631563 | orchestrator | 2026-04-08 00:55:47 | INFO  | Task 70eb73db-4e37-47ea-851e-8230056c9328 is in state SUCCESS 2026-04-08 00:55:47.632676 | orchestrator | 2026-04-08 00:55:47.632748 | orchestrator | 2026-04-08 00:55:47.632758 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-08 00:55:47.632768 | orchestrator | 2026-04-08 00:55:47.632775 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-08 00:55:47.632782 | orchestrator | Wednesday 08 April 2026 00:52:51 +0000 (0:00:00.090) 0:00:00.090 ******* 2026-04-08 00:55:47.632838 | orchestrator | ok: [localhost] => { 2026-04-08 00:55:47.632848 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-08 00:55:47.632855 | orchestrator | } 2026-04-08 00:55:47.632863 | orchestrator | 2026-04-08 00:55:47.632869 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-08 00:55:47.632900 | orchestrator | Wednesday 08 April 2026 00:52:51 +0000 (0:00:00.053) 0:00:00.143 ******* 2026-04-08 00:55:47.632906 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-08 00:55:47.632914 | orchestrator | ...ignoring 2026-04-08 00:55:47.632919 | orchestrator | 2026-04-08 00:55:47.632923 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-08 00:55:47.632927 | orchestrator | Wednesday 08 April 2026 00:52:54 +0000 (0:00:02.800) 0:00:02.943 ******* 2026-04-08 00:55:47.632930 | orchestrator | skipping: [localhost] 2026-04-08 00:55:47.632934 | orchestrator | 2026-04-08 00:55:47.632938 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-08 00:55:47.632942 | orchestrator | Wednesday 08 April 2026 00:52:54 +0000 (0:00:00.054) 0:00:02.998 ******* 2026-04-08 00:55:47.632946 | orchestrator | ok: [localhost] 2026-04-08 00:55:47.632951 | orchestrator | 2026-04-08 00:55:47.632958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:55:47.633076 | orchestrator | 2026-04-08 00:55:47.633083 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:55:47.633090 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:00.182) 0:00:03.180 ******* 2026-04-08 00:55:47.633096 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:55:47.633103 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:55:47.633109 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:55:47.633116 | orchestrator | 2026-04-08 00:55:47.633123 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:55:47.633129 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:00.307) 0:00:03.488 ******* 2026-04-08 00:55:47.633136 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-08 00:55:47.633143 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-08 00:55:47.633149 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-08 00:55:47.633156 | orchestrator | 2026-04-08 00:55:47.633160 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-08 00:55:47.633164 | orchestrator | 2026-04-08 00:55:47.633168 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-08 00:55:47.633171 | orchestrator | Wednesday 08 April 2026 00:52:55 +0000 (0:00:00.385) 0:00:03.873 ******* 2026-04-08 00:55:47.633175 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-08 00:55:47.633179 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-08 00:55:47.633183 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-08 00:55:47.633186 | orchestrator | 2026-04-08 00:55:47.633190 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-08 00:55:47.633194 | orchestrator | Wednesday 08 April 2026 00:52:56 +0000 (0:00:00.348) 0:00:04.221 ******* 2026-04-08 00:55:47.633198 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:55:47.633202 | orchestrator | 2026-04-08 00:55:47.633205 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-08 00:55:47.633209 | orchestrator | Wednesday 08 April 2026 00:52:56 +0000 (0:00:00.753) 0:00:04.975 ******* 2026-04-08 00:55:47.633301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633357 | orchestrator | 2026-04-08 00:55:47.633373 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-08 00:55:47.633380 | orchestrator | Wednesday 08 April 2026 00:52:59 +0000 (0:00:02.981) 0:00:07.957 ******* 2026-04-08 00:55:47.633387 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:47.633393 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:47.633397 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:47.633401 | orchestrator | 2026-04-08 00:55:47.633405 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-08 00:55:47.633409 | orchestrator | Wednesday 08 April 2026 00:53:00 +0000 (0:00:00.626) 0:00:08.583 ******* 2026-04-08 00:55:47.633412 | orchestrator | skipping: [testbed-node-1] 2026-04-08 
00:55:47.633416 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:47.633420 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:47.633424 | orchestrator | 2026-04-08 00:55:47.633427 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-08 00:55:47.633431 | orchestrator | Wednesday 08 April 2026 00:53:01 +0000 (0:00:01.287) 0:00:09.871 ******* 2026-04-08 00:55:47.633435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 
00:55:47.633465 | orchestrator | 2026-04-08 00:55:47.633469 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-08 00:55:47.633473 | orchestrator | Wednesday 08 April 2026 00:53:05 +0000 (0:00:03.596) 0:00:13.468 ******* 2026-04-08 00:55:47.633477 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:47.633480 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:47.633484 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:47.633488 | orchestrator | 2026-04-08 00:55:47.633491 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-08 00:55:47.633495 | orchestrator | Wednesday 08 April 2026 00:53:06 +0000 (0:00:01.048) 0:00:14.516 ******* 2026-04-08 00:55:47.633499 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:55:47.633506 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:47.633510 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:55:47.633514 | orchestrator | 2026-04-08 00:55:47.633517 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-08 00:55:47.633521 | orchestrator | Wednesday 08 April 2026 00:53:10 +0000 (0:00:04.091) 0:00:18.607 ******* 2026-04-08 00:55:47.633527 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:55:47.633531 | orchestrator | 2026-04-08 00:55:47.633535 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-08 00:55:47.633539 | orchestrator | Wednesday 08 April 2026 00:53:11 +0000 (0:00:00.593) 0:00:19.201 ******* 2026-04-08 00:55:47.633547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633552 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:47.633556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633563 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:47.633574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633578 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:47.633582 | orchestrator | 2026-04-08 00:55:47.633586 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-08 00:55:47.633590 | orchestrator | Wednesday 08 April 2026 00:53:14 +0000 (0:00:03.598) 0:00:22.799 ******* 2026-04-08 00:55:47.633594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633601 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:47.633610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633614 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:47.633618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633625 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:47.633629 | orchestrator | 2026-04-08 00:55:47.633633 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-08 00:55:47.633637 | orchestrator | Wednesday 08 April 2026 00:53:17 +0000 (0:00:02.718) 0:00:25.518 ******* 2026-04-08 00:55:47.633643 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633648 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:55:47.633655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633662 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:55:47.633668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:55:47.633672 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:55:47.633676 | orchestrator | 2026-04-08 00:55:47.633680 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-08 00:55:47.633684 | orchestrator | Wednesday 08 April 2026 00:53:20 +0000 
(0:00:02.865) 0:00:28.384 ******* 2026-04-08 00:55:47.633692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:55:47.633717 | orchestrator | 2026-04-08 00:55:47.633721 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-08 00:55:47.633725 | orchestrator | Wednesday 08 April 2026 00:53:24 +0000 (0:00:03.909) 0:00:32.293 ******* 2026-04-08 00:55:47.633733 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:55:47.633737 | orchestrator | 
changed: [testbed-node-1]
2026-04-08 00:55:47.633742 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:55:47.633746 | orchestrator |
2026-04-08 00:55:47.633750 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-08 00:55:47.633754 | orchestrator | Wednesday 08 April 2026 00:53:25 +0000 (0:00:00.910) 0:00:33.204 *******
2026-04-08 00:55:47.633758 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.633763 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:55:47.633767 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:55:47.633771 | orchestrator |
2026-04-08 00:55:47.633776 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-08 00:55:47.633780 | orchestrator | Wednesday 08 April 2026 00:53:25 +0000 (0:00:00.403) 0:00:33.568 *******
2026-04-08 00:55:47.633784 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.633789 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:55:47.633793 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:55:47.633797 | orchestrator |
2026-04-08 00:55:47.633802 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-08 00:55:47.633806 | orchestrator | Wednesday 08 April 2026 00:53:25 +0000 (0:00:00.364) 0:00:33.972 *******
2026-04-08 00:55:47.633811 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-08 00:55:47.633816 | orchestrator | ...ignoring
2026-04-08 00:55:47.633820 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-08 00:55:47.633825 | orchestrator | ...ignoring
2026-04-08 00:55:47.633830 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-08 00:55:47.633834 | orchestrator | ...ignoring
2026-04-08 00:55:47.633838 | orchestrator |
2026-04-08 00:55:47.633841 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-08 00:55:47.633845 | orchestrator | Wednesday 08 April 2026 00:53:37 +0000 (0:00:11.211) 0:00:45.184 *******
2026-04-08 00:55:47.633851 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.633855 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:55:47.633859 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:55:47.633863 | orchestrator |
2026-04-08 00:55:47.633866 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-08 00:55:47.633870 | orchestrator | Wednesday 08 April 2026 00:53:37 +0000 (0:00:00.469) 0:00:45.653 *******
2026-04-08 00:55:47.633874 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.633878 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.633881 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.633885 | orchestrator |
2026-04-08 00:55:47.633889 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-08 00:55:47.633893 | orchestrator | Wednesday 08 April 2026 00:53:37 +0000 (0:00:00.491) 0:00:46.145 *******
2026-04-08 00:55:47.633896 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.633901 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.633907 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.633913 | orchestrator |
2026-04-08 00:55:47.633919 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-08 00:55:47.633925 | orchestrator | Wednesday 08 April 2026 00:53:38 +0000 (0:00:00.455) 0:00:46.601 *******
2026-04-08 00:55:47.633931 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.633937 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.633942 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.633949 | orchestrator |
2026-04-08 00:55:47.633954 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-08 00:55:47.633960 | orchestrator | Wednesday 08 April 2026 00:53:39 +0000 (0:00:00.939) 0:00:47.541 *******
2026-04-08 00:55:47.633966 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.633980 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:55:47.633986 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:55:47.633993 | orchestrator |
2026-04-08 00:55:47.634000 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-08 00:55:47.634008 | orchestrator | Wednesday 08 April 2026 00:53:39 +0000 (0:00:00.424) 0:00:47.965 *******
2026-04-08 00:55:47.634066 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.634072 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634076 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634079 | orchestrator |
2026-04-08 00:55:47.634083 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-08 00:55:47.634087 | orchestrator | Wednesday 08 April 2026 00:53:40 +0000 (0:00:00.388) 0:00:48.354 *******
2026-04-08 00:55:47.634091 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634095 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634099 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-08 00:55:47.634102 | orchestrator |
2026-04-08 00:55:47.634106 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-08 00:55:47.634110 | orchestrator | Wednesday 08 April 2026 00:53:40 +0000 (0:00:00.406) 0:00:48.761 *******
2026-04-08 00:55:47.634114 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634118 | orchestrator |
2026-04-08 00:55:47.634121 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-08 00:55:47.634125 | orchestrator | Wednesday 08 April 2026 00:53:51 +0000 (0:00:10.770) 0:00:59.531 *******
2026-04-08 00:55:47.634129 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.634132 | orchestrator |
2026-04-08 00:55:47.634136 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-08 00:55:47.634140 | orchestrator | Wednesday 08 April 2026 00:53:51 +0000 (0:00:00.300) 0:00:59.832 *******
2026-04-08 00:55:47.634144 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.634147 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634151 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634155 | orchestrator |
2026-04-08 00:55:47.634158 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-08 00:55:47.634162 | orchestrator | Wednesday 08 April 2026 00:53:52 +0000 (0:00:00.770) 0:01:00.602 *******
2026-04-08 00:55:47.634166 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634170 | orchestrator |
2026-04-08 00:55:47.634173 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-08 00:55:47.634177 | orchestrator | Wednesday 08 April 2026 00:53:59 +0000 (0:00:07.525) 0:01:08.128 *******
2026-04-08 00:55:47.634181 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.634184 | orchestrator |
2026-04-08 00:55:47.634188 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-08 00:55:47.634192 | orchestrator | Wednesday 08 April 2026 00:54:02 +0000 (0:00:02.543) 0:01:10.672 *******
2026-04-08 00:55:47.634196 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.634199 | orchestrator |
2026-04-08 00:55:47.634203 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-08 00:55:47.634207 | orchestrator | Wednesday 08 April 2026 00:54:05 +0000 (0:00:02.680) 0:01:13.352 *******
2026-04-08 00:55:47.634211 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634214 | orchestrator |
2026-04-08 00:55:47.634218 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-08 00:55:47.634222 | orchestrator | Wednesday 08 April 2026 00:54:05 +0000 (0:00:00.132) 0:01:13.485 *******
2026-04-08 00:55:47.634225 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.634229 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634249 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634253 | orchestrator |
2026-04-08 00:55:47.634256 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-08 00:55:47.634260 | orchestrator | Wednesday 08 April 2026 00:54:05 +0000 (0:00:00.323) 0:01:13.809 *******
2026-04-08 00:55:47.634269 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.634273 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:55:47.634276 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:55:47.634280 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-08 00:55:47.634284 | orchestrator |
2026-04-08 00:55:47.634288 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-08 00:55:47.634291 | orchestrator | skipping: no hosts matched
2026-04-08 00:55:47.634295 | orchestrator |
2026-04-08 00:55:47.634299 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-08 00:55:47.634302 | orchestrator |
2026-04-08 00:55:47.634310 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-08 00:55:47.634314 | orchestrator | Wednesday 08 April 2026 00:54:05 +0000 (0:00:00.329) 0:01:14.138 *******
2026-04-08 00:55:47.634318 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:55:47.634322 | orchestrator |
2026-04-08 00:55:47.634325 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-08 00:55:47.634329 | orchestrator | Wednesday 08 April 2026 00:54:22 +0000 (0:00:16.398) 0:01:30.537 *******
2026-04-08 00:55:47.634333 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:55:47.634337 | orchestrator |
2026-04-08 00:55:47.634340 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-08 00:55:47.634344 | orchestrator | Wednesday 08 April 2026 00:54:38 +0000 (0:00:15.633) 0:01:46.170 *******
2026-04-08 00:55:47.634348 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:55:47.634352 | orchestrator |
2026-04-08 00:55:47.634355 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-08 00:55:47.634359 | orchestrator |
2026-04-08 00:55:47.634363 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-08 00:55:47.634366 | orchestrator | Wednesday 08 April 2026 00:54:40 +0000 (0:00:02.495) 0:01:48.665 *******
2026-04-08 00:55:47.634370 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:55:47.634374 | orchestrator |
2026-04-08 00:55:47.634378 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-08 00:55:47.634381 | orchestrator | Wednesday 08 April 2026 00:54:57 +0000 (0:00:16.832) 0:02:05.498 *******
2026-04-08 00:55:47.634385 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:55:47.634389 | orchestrator |
2026-04-08 00:55:47.634392 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-08 00:55:47.634396 | orchestrator | Wednesday 08 April 2026 00:55:13 +0000 (0:00:15.928) 0:02:21.427 *******
2026-04-08 00:55:47.634400 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:55:47.634403 | orchestrator |
2026-04-08 00:55:47.634407 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-08 00:55:47.634411 | orchestrator |
2026-04-08 00:55:47.634417 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-08 00:55:47.634421 | orchestrator | Wednesday 08 April 2026 00:55:15 +0000 (0:00:02.421) 0:02:23.848 *******
2026-04-08 00:55:47.634425 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634429 | orchestrator |
2026-04-08 00:55:47.634432 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-08 00:55:47.634436 | orchestrator | Wednesday 08 April 2026 00:55:27 +0000 (0:00:11.579) 0:02:35.428 *******
2026-04-08 00:55:47.634440 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.634443 | orchestrator |
2026-04-08 00:55:47.634447 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-08 00:55:47.634451 | orchestrator | Wednesday 08 April 2026 00:55:31 +0000 (0:00:04.559) 0:02:39.988 *******
2026-04-08 00:55:47.634455 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.634458 | orchestrator |
2026-04-08 00:55:47.634462 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-08 00:55:47.634466 | orchestrator |
2026-04-08 00:55:47.634471 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-08 00:55:47.634481 | orchestrator | Wednesday 08 April 2026 00:55:34 +0000 (0:00:02.405) 0:02:42.393 *******
2026-04-08 00:55:47.634487 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:55:47.634493 | orchestrator |
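The repeated "port liveness" and "WSREP sync" waits above come down to two checks per node: read the server greeting on port 3306 and look for the string `MariaDB` (this is what the earlier `Timeout when waiting for search string MariaDB in 192.168.16.10:3306` failure refers to), and query `wsrep_local_state_comment` until it reports `Synced`. A minimal sketch of both checks, with hypothetical helper names (kolla-ansible does this via Ansible's `wait_for` module and MySQL status queries, not this code):

```python
import socket


def port_serves_mariadb(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Rough equivalent of wait_for with a 'MariaDB' search string:
    the server speaks first, and its handshake packet contains the
    version string (e.g. '5.5.5-10.11.x-MariaDB')."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return b"MariaDB" in sock.recv(1024)
    except OSError:
        return False


def wsrep_synced(status_output: str) -> bool:
    """Parse tab-separated output of
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'
    and report whether the node is in the Synced state."""
    for line in status_output.splitlines():
        parts = line.strip().split("\t", 1)
        if parts[0] == "wsrep_local_state_comment":
            return len(parts) == 2 and parts[1] == "Synced"
    return False
```

A joining node typically passes through `Joined` or `Donor/Desynced` before `Synced`, which is why the log shows the sync wait taking ~15 s on the two non-bootstrap nodes.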
2026-04-08 00:55:47.634499 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-08 00:55:47.634505 | orchestrator | Wednesday 08 April 2026 00:55:34 +0000 (0:00:00.640) 0:02:43.034 *******
2026-04-08 00:55:47.634511 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634517 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634523 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634530 | orchestrator |
2026-04-08 00:55:47.634536 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-08 00:55:47.634544 | orchestrator | Wednesday 08 April 2026 00:55:37 +0000 (0:00:02.313) 0:02:45.348 *******
2026-04-08 00:55:47.634553 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634560 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634565 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634571 | orchestrator |
2026-04-08 00:55:47.634577 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-08 00:55:47.634584 | orchestrator | Wednesday 08 April 2026 00:55:39 +0000 (0:00:02.177) 0:02:47.525 *******
2026-04-08 00:55:47.634590 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634596 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634602 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634608 | orchestrator |
2026-04-08 00:55:47.634614 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-08 00:55:47.634621 | orchestrator | Wednesday 08 April 2026 00:55:41 +0000 (0:00:02.115) 0:02:49.641 *******
2026-04-08 00:55:47.634625 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634629 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634633 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:55:47.634637 | orchestrator |
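The restart plays in the log walk the Galera cluster in a fixed order: the non-bootstrap nodes (testbed-node-1, then testbed-node-2) are restarted one at a time, and the bootstrap node testbed-node-0 goes last, so a quorum-holding member stays up throughout. A sketch of that ordering, with a hypothetical helper name rather than the actual playbook logic:

```python
def rolling_restart_order(hosts: list[str], bootstrap_host: str) -> list[str]:
    """Restart non-bootstrap nodes first and the bootstrap node last,
    mirroring the play order visible in the log above (hypothetical helper)."""
    others = [h for h in hosts if h != bootstrap_host]
    return others + [bootstrap_host]
```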
2026-04-08 00:55:47.634640 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-08 00:55:47.634644 | orchestrator | Wednesday 08 April 2026 00:55:43 +0000 (0:00:02.245) 0:02:51.886 *******
2026-04-08 00:55:47.634648 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:55:47.634652 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:55:47.634656 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:55:47.634659 | orchestrator |
2026-04-08 00:55:47.634663 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-08 00:55:47.634667 | orchestrator | Wednesday 08 April 2026 00:55:46 +0000 (0:00:02.738) 0:02:54.625 *******
2026-04-08 00:55:47.634671 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:55:47.634674 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:55:47.634678 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:55:47.634682 | orchestrator |
2026-04-08 00:55:47.634686 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:55:47.634694 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-08 00:55:47.634698 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-04-08 00:55:47.634704 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-08 00:55:47.634708 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-08 00:55:47.634711 | orchestrator |
2026-04-08 00:55:47.634715 | orchestrator |
2026-04-08 00:55:47.634719 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:55:47.634723 | orchestrator | Wednesday 08 April 2026 00:55:46 +0000 (0:00:00.219) 0:02:54.844 *******
2026-04-08 00:55:47.634731 | orchestrator | ===============================================================================
2026-04-08 00:55:47.634735 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.23s
2026-04-08 00:55:47.634739 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.56s
2026-04-08 00:55:47.634742 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.58s
2026-04-08 00:55:47.634746 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.21s
2026-04-08 00:55:47.634750 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.77s
2026-04-08 00:55:47.634754 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.53s
2026-04-08 00:55:47.634760 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.92s
2026-04-08 00:55:47.634764 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.56s
2026-04-08 00:55:47.634768 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.09s
2026-04-08 00:55:47.634772 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.91s
2026-04-08 00:55:47.634776 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.60s
2026-04-08 00:55:47.634779 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.60s
2026-04-08 00:55:47.634783 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.98s
2026-04-08 00:55:47.634787 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.87s
2026-04-08 00:55:47.634791 | orchestrator | Check MariaDB service --------------------------------------------------- 2.80s
2026-04-08 00:55:47.634794 | orchestrator | 
mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.74s
2026-04-08 00:55:47.634798 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.72s
2026-04-08 00:55:47.634802 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.68s
2026-04-08 00:55:47.634806 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.54s
2026-04-08 00:55:47.634809 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.41s
2026-04-08 00:55:47.634813 | orchestrator | 2026-04-08 00:55:47 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:55:50.688031 | orchestrator | 2026-04-08 00:55:50 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:55:50.688565 | orchestrator | 2026-04-08 00:55:50 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:55:50.690823 | orchestrator | 2026-04-08 00:55:50 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:55:50.690866 | orchestrator | 2026-04-08 00:55:50 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:55:53.725159 | orchestrator | 2026-04-08 00:55:53 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:55:53.725392 | orchestrator | 2026-04-08 00:55:53 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:55:53.726321 | orchestrator | 2026-04-08 00:55:53 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:55:53.726375 | orchestrator | 2026-04-08 00:55:53 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:55:56.764066 | orchestrator | 2026-04-08 00:55:56 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:55:56.764134 | orchestrator | 2026-04-08 00:55:56 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:55:56.765018 | orchestrator | 2026-04-08 00:55:56 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:55:56.765054 | orchestrator | 2026-04-08 00:55:56 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:55:59.796308 | orchestrator | 2026-04-08 00:55:59 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:55:59.797913 | orchestrator | 2026-04-08 00:55:59 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:55:59.799907 | orchestrator | 2026-04-08 00:55:59 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:55:59.800183 | orchestrator | 2026-04-08 00:55:59 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:02.847061 | orchestrator | 2026-04-08 00:56:02 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:02.849772 | orchestrator | 2026-04-08 00:56:02 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:02.850450 | orchestrator | 2026-04-08 00:56:02 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:02.850480 | orchestrator | 2026-04-08 00:56:02 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:05.885055 | orchestrator | 2026-04-08 00:56:05 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:05.885844 | orchestrator | 2026-04-08 00:56:05 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:05.887590 | orchestrator | 2026-04-08 00:56:05 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:05.887640 | orchestrator | 2026-04-08 00:56:05 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:08.921903 | orchestrator | 2026-04-08 00:56:08 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:08.921976 | orchestrator | 2026-04-08 00:56:08 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:08.926263 | orchestrator | 2026-04-08 00:56:08 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:08.926330 | orchestrator | 2026-04-08 00:56:08 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:11.965216 | orchestrator | 2026-04-08 00:56:11 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:11.968247 | orchestrator | 2026-04-08 00:56:11 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:11.968293 | orchestrator | 2026-04-08 00:56:11 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:11.968298 | orchestrator | 2026-04-08 00:56:11 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:15.001133 | orchestrator | 2026-04-08 00:56:15 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:15.001262 | orchestrator | 2026-04-08 00:56:15 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:15.002757 | orchestrator | 2026-04-08 00:56:15 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:15.002816 | orchestrator | 2026-04-08 00:56:15 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:18.028817 | orchestrator | 2026-04-08 00:56:18 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:18.029345 | orchestrator | 2026-04-08 00:56:18 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:18.030069 | orchestrator | 2026-04-08 00:56:18 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:18.030103 | orchestrator | 2026-04-08 00:56:18 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:21.068446 | orchestrator | 2026-04-08 00:56:21 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:21.069136 | orchestrator | 2026-04-08 00:56:21 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:21.070511 | orchestrator | 2026-04-08 00:56:21 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:21.070556 | orchestrator | 2026-04-08 00:56:21 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:24.107074 | orchestrator | 2026-04-08 00:56:24 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:24.108557 | orchestrator | 2026-04-08 00:56:24 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:24.109756 | orchestrator | 2026-04-08 00:56:24 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:24.109807 | orchestrator | 2026-04-08 00:56:24 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:27.147597 | orchestrator | 2026-04-08 00:56:27 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:27.148903 | orchestrator | 2026-04-08 00:56:27 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:27.149772 | orchestrator | 2026-04-08 00:56:27 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:27.149820 | orchestrator | 2026-04-08 00:56:27 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:30.182712 | orchestrator | 2026-04-08 00:56:30 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state STARTED
2026-04-08 00:56:30.184466 | orchestrator | 2026-04-08 00:56:30 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED
2026-04-08 00:56:30.186444 | orchestrator | 2026-04-08 00:56:30 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED
2026-04-08 00:56:30.186521 | orchestrator | 2026-04-08 00:56:30 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:33.227037 | orchestrator | 2026-04-08 00:56:33 | INFO  | Task 8274896d-67e9-4f19-b42c-ca0fc6d8f426 is in state SUCCESS
2026-04-08 00:56:33.227991 | orchestrator |
2026-04-08 00:56:33.228034 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-08 00:56:33.228042 | orchestrator | 2.16.14
2026-04-08 00:56:33.228048 | orchestrator |
2026-04-08 00:56:33.228054 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-08 00:56:33.228061 | orchestrator |
2026-04-08 00:56:33.228067 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-08 00:56:33.228073 | orchestrator | Wednesday 08 April 2026 00:54:41 +0000 (0:00:00.581) 0:00:00.581 *******
2026-04-08 00:56:33.228079 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:56:33.228085 | orchestrator |
2026-04-08 00:56:33.228091 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-08 00:56:33.228096 | orchestrator | Wednesday 08 April 2026 00:54:42 +0000 (0:00:00.631) 0:00:01.213 *******
2026-04-08 00:56:33.228102 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.228107 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.228113 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.228119 | orchestrator |
2026-04-08 00:56:33.228128 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-08 00:56:33.228136 | orchestrator | Wednesday 08 April 2026 00:54:43 +0000 (0:00:00.980) 0:00:02.193 *******
2026-04-08 00:56:33.228149 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.228159 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.228169 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.228177 | orchestrator |
2026-04-08 00:56:33.228186 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-08 00:56:33.228263 | orchestrator | Wednesday 08 April 2026 00:54:43 +0000 (0:00:00.267) 0:00:02.461 *******
2026-04-08 00:56:33.228274 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.228295 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.228524 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.228530 | orchestrator |
2026-04-08 00:56:33.228837 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-08 00:56:33.228844 | orchestrator | Wednesday 08 April 2026 00:54:44 +0000 (0:00:00.767) 0:00:03.229 *******
2026-04-08 00:56:33.228850 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.228855 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.228860 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.228866 | orchestrator |
2026-04-08 00:56:33.228872 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-08 00:56:33.228877 | orchestrator | Wednesday 08 April 2026 00:54:44 +0000 (0:00:00.302) 0:00:03.531 *******
2026-04-08 00:56:33.228883 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.228889 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.228894 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.228899 | orchestrator |
2026-04-08 00:56:33.228905 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-08 00:56:33.228910 | orchestrator | Wednesday 08 April 2026 00:54:45 +0000 (0:00:00.307) 0:00:03.839 *******
2026-04-08 00:56:33.228916 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.228921 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.228927 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.228932 | orchestrator |
2026-04-08 00:56:33.228938 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-08 00:56:33.228943 | orchestrator | Wednesday 08 April 2026 00:54:45 +0000 (0:00:00.289) 0:00:04.129 *******
2026-04-08 00:56:33.228950 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.228962 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.228970 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.228984 | orchestrator |
2026-04-08 00:56:33.228992 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-08 00:56:33.229000 | orchestrator | Wednesday 08 April 2026 00:54:45 +0000 (0:00:00.408) 0:00:04.537 *******
2026-04-08 00:56:33.229009 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.229017 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.229025 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.229033 | orchestrator |
2026-04-08 00:56:33.229041 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-08 00:56:33.229049 | orchestrator | Wednesday 08 April 2026 00:54:46 +0000 (0:00:00.255) 0:00:04.793 *******
2026-04-08 00:56:33.229056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-08 00:56:33.229064 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-08 00:56:33.229073 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-08 00:56:33.229082 | orchestrator |
2026-04-08 00:56:33.229090 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-08 00:56:33.229099 | orchestrator | Wednesday 08 April 2026 00:54:46 +0000 (0:00:00.581) 0:00:05.375 *******
2026-04-08 00:56:33.229108 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.229114 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.229119 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.229124 | 
orchestrator |
2026-04-08 00:56:33.229141 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-08 00:56:33.229147 | orchestrator | Wednesday 08 April 2026 00:54:47 +0000 (0:00:00.391) 0:00:05.766 *******
2026-04-08 00:56:33.229153 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-08 00:56:33.229158 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-08 00:56:33.229163 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-08 00:56:33.229178 | orchestrator |
2026-04-08 00:56:33.229184 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-08 00:56:33.229189 | orchestrator | Wednesday 08 April 2026 00:54:49 +0000 (0:00:02.815) 0:00:08.582 *******
2026-04-08 00:56:33.229195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-08 00:56:33.229201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-08 00:56:33.229224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-08 00:56:33.229230 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229235 | orchestrator |
2026-04-08 00:56:33.229272 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-08 00:56:33.229279 | orchestrator | Wednesday 08 April 2026 00:54:50 +0000 (0:00:00.451) 0:00:09.034 *******
2026-04-08 00:56:33.229287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229295 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229307 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229312 | orchestrator |
2026-04-08 00:56:33.229317 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-08 00:56:33.229323 | orchestrator | Wednesday 08 April 2026 00:54:51 +0000 (0:00:00.709) 0:00:09.743 *******
2026-04-08 00:56:33.229330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229344 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229350 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229355 | orchestrator |
2026-04-08 00:56:33.229361 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-08 00:56:33.229366 | orchestrator | Wednesday 08 April 2026 00:54:51 +0000 (0:00:00.137) 0:00:09.880 *******
2026-04-08 00:56:33.229378 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ca3b266c2e35', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-08 00:54:47.987429', 'end': '2026-04-08 00:54:48.026396', 'delta': '0:00:00.038967', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ca3b266c2e35'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229391 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a630e307f239', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-08 00:54:48.898722', 'end': '2026-04-08 00:54:48.938599', 'delta': '0:00:00.039877', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a630e307f239'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229415 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd23490622b0e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-08 00:54:49.756736', 'end': '2026-04-08 00:54:49.792596', 'delta': '0:00:00.035860', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d23490622b0e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-08 00:56:33.229421 | orchestrator |
2026-04-08 00:56:33.229427 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-08 00:56:33.229432 | orchestrator | Wednesday 08 April 2026 00:54:51 +0000 (0:00:00.295) 0:00:10.176 *******
2026-04-08 00:56:33.229438 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.229443 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:56:33.229449 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:56:33.229454 | orchestrator |
2026-04-08 00:56:33.229459 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-08 00:56:33.229465 | orchestrator | Wednesday 08 April 2026 00:54:51 +0000 (0:00:00.371) 0:00:10.547 *******
2026-04-08 00:56:33.229470 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-08 00:56:33.229477 | orchestrator |
2026-04-08 00:56:33.229483 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-08 00:56:33.229490 | orchestrator | Wednesday 08 April 2026 00:54:53 +0000 (0:00:01.407) 0:00:11.954 
*******
2026-04-08 00:56:33.229496 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229503 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229509 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229515 | orchestrator |
2026-04-08 00:56:33.229521 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-08 00:56:33.229527 | orchestrator | Wednesday 08 April 2026 00:54:53 +0000 (0:00:00.271) 0:00:12.226 *******
2026-04-08 00:56:33.229534 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229540 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229547 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229553 | orchestrator |
2026-04-08 00:56:33.229559 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-08 00:56:33.229566 | orchestrator | Wednesday 08 April 2026 00:54:53 +0000 (0:00:00.402) 0:00:12.628 *******
2026-04-08 00:56:33.229573 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229579 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229586 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229596 | orchestrator |
2026-04-08 00:56:33.229603 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-08 00:56:33.229608 | orchestrator | Wednesday 08 April 2026 00:54:54 +0000 (0:00:00.461) 0:00:13.090 *******
2026-04-08 00:56:33.229614 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:56:33.229619 | orchestrator |
2026-04-08 00:56:33.229624 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-08 00:56:33.229630 | orchestrator | Wednesday 08 April 2026 00:54:54 +0000 (0:00:00.137) 0:00:13.228 *******
2026-04-08 00:56:33.229635 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229640 | orchestrator |
2026-04-08 00:56:33.229646 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-08 00:56:33.229651 | orchestrator | Wednesday 08 April 2026 00:54:54 +0000 (0:00:00.222) 0:00:13.451 *******
2026-04-08 00:56:33.229656 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229662 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229667 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229672 | orchestrator |
2026-04-08 00:56:33.229678 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-08 00:56:33.229683 | orchestrator | Wednesday 08 April 2026 00:54:55 +0000 (0:00:00.287) 0:00:13.738 *******
2026-04-08 00:56:33.229688 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229694 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229699 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229704 | orchestrator |
2026-04-08 00:56:33.229710 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-08 00:56:33.229715 | orchestrator | Wednesday 08 April 2026 00:54:55 +0000 (0:00:00.316) 0:00:14.054 *******
2026-04-08 00:56:33.229721 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229726 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229731 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229737 | orchestrator |
2026-04-08 00:56:33.229745 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-08 00:56:33.229751 | orchestrator | Wednesday 08 April 2026 00:54:55 +0000 (0:00:00.504) 0:00:14.559 *******
2026-04-08 00:56:33.229756 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229761 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229767 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229772 | orchestrator |
2026-04-08 00:56:33.229777 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-08 00:56:33.229783 | orchestrator | Wednesday 08 April 2026 00:54:56 +0000 (0:00:00.313) 0:00:14.872 *******
2026-04-08 00:56:33.229788 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229794 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229799 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229804 | orchestrator |
2026-04-08 00:56:33.229810 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-08 00:56:33.229815 | orchestrator | Wednesday 08 April 2026 00:54:56 +0000 (0:00:00.309) 0:00:15.181 *******
2026-04-08 00:56:33.229820 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229826 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229831 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229852 | orchestrator |
2026-04-08 00:56:33.229859 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-08 00:56:33.229864 | orchestrator | Wednesday 08 April 2026 00:54:56 +0000 (0:00:00.307) 0:00:15.489 *******
2026-04-08 00:56:33.229870 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:56:33.229875 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:56:33.229881 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:56:33.229886 | orchestrator |
2026-04-08 00:56:33.229891 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-08 00:56:33.229897 | orchestrator | Wednesday 08 April 2026 00:54:57 +0000 (0:00:00.536) 0:00:16.026 *******
2026-04-08 00:56:33.229903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285', 'dm-uuid-LVM-DCtP4WqFyDlImNS25WUpBspIXbQ4b0MseJNdmaqBSWhvH3Znhvwkh6UD8M5v6au3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-08 00:56:33.229914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3', 'dm-uuid-LVM-BAlq3j3YZdEKD1c9X4cS0qsBF7TBXnmKdS3aHAqRkuDb5fBHAv2rWwtm6NolRKSw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-08 00:56:33.229919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:56:33.229925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-08 00:56:33.229931 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.229940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.229945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.229967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.229974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.229983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.229992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82M0gD-twSo-xX2e-GnNF-pWTz-pR9F-4A2iHp', 'scsi-0QEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232', 'scsi-SQEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xrVDS7-YlYj-c8pa-CWN3-w6TN-zGlL-9Yq4AT', 'scsi-0QEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76', 'scsi-SQEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117', 'scsi-SQEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162', 'dm-uuid-LVM-6l4kJSOv0R94h2yRg4PqmHo3vUKfeSF5I1LaWdvSHWsEWdizfAL30P0VjYcyBq5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035', 'dm-uuid-LVM-4l8XNG7D4K7HeOCdF199MCfOBuuofcWRFyfQpVHgpdArkKYJbUiWvAU03VsAlqZ2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-08 00:56:33.230121 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.230144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2', 'dm-uuid-LVM-4dOidnlTm9bAFU1bQbvhIfmV07E14tCv27YsQyeErGXnbtNwmdHoqxbHs0BmwtP4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230237 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNimtB-lzSd-iWW4-fWZp-LNYc-tskR-lU4ln0', 'scsi-0QEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707', 'scsi-SQEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494', 'dm-uuid-LVM-ZOWwtAXmXVeZGdA6c4d19phCtA4iFWHEWLP3dDLMb4oHu8JWx5caD1wehFycts3r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-33A23c-Engk-9daO-jSPu-Tl11-ekzq-Jb8fW0', 'scsi-0QEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466', 'scsi-SQEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047', 'scsi-SQEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-08 00:56:33.230298 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.230303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:56:33.230362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16', 
'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YSmTwt-CYiQ-X7jk-JP2g-hMRT-5ooj-Q2UMoO', 'scsi-0QEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f', 'scsi-SQEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-72ezyk-TfkG-aAnx-KaAz-mKlf-IuXi-AZeHcs', 'scsi-0QEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567', 'scsi-SQEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5', 'scsi-SQEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:56:33.230429 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.230438 | orchestrator | 2026-04-08 00:56:33.230446 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-08 00:56:33.230454 | orchestrator | Wednesday 08 April 2026 00:54:57 +0000 (0:00:00.562) 0:00:16.589 ******* 2026-04-08 00:56:33.230464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285', 'dm-uuid-LVM-DCtP4WqFyDlImNS25WUpBspIXbQ4b0MseJNdmaqBSWhvH3Znhvwkh6UD8M5v6au3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230474 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3', 'dm-uuid-LVM-BAlq3j3YZdEKD1c9X4cS0qsBF7TBXnmKdS3aHAqRkuDb5fBHAv2rWwtm6NolRKSw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230563 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162', 'dm-uuid-LVM-6l4kJSOv0R94h2yRg4PqmHo3vUKfeSF5I1LaWdvSHWsEWdizfAL30P0VjYcyBq5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230574 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035', 'dm-uuid-LVM-4l8XNG7D4K7HeOCdF199MCfOBuuofcWRFyfQpVHgpdArkKYJbUiWvAU03VsAlqZ2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5de6439c-8009-46ad-8736-37ced6604b2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bf49c8a6--5f7f--52ec--8321--922f51127285-osd--block--bf49c8a6--5f7f--52ec--8321--922f51127285'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82M0gD-twSo-xX2e-GnNF-pWTz-pR9F-4A2iHp', 'scsi-0QEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232', 'scsi-SQEMU_QEMU_HARDDISK_d0f6de66-4fec-4fd7-97e2-1741dd54f232'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--42db71c5--e51d--540c--8fbe--0cd4e432c3d3-osd--block--42db71c5--e51d--540c--8fbe--0cd4e432c3d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xrVDS7-YlYj-c8pa-CWN3-w6TN-zGlL-9Yq4AT', 'scsi-0QEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76', 'scsi-SQEMU_QEMU_HARDDISK_7b23824a-491e-4dc1-9823-22fa2ac48d76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230623 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117', 'scsi-SQEMU_QEMU_HARDDISK_a8171b98-d766-41eb-84f8-e0c6f3fec117'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230662 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:56:33.230668 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230680 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230685 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16', 'scsi-SQEMU_QEMU_HARDDISK_5a9b4992-de90-4207-841b-10d280749dda-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230711 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--31d7fbda--737c--5413--835b--7dea8c782162-osd--block--31d7fbda--737c--5413--835b--7dea8c782162'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNimtB-lzSd-iWW4-fWZp-LNYc-tskR-lU4ln0', 'scsi-0QEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707', 'scsi-SQEMU_QEMU_HARDDISK_706accd8-4e49-4054-bb21-fde08475a707'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230717 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2', 'dm-uuid-LVM-4dOidnlTm9bAFU1bQbvhIfmV07E14tCv27YsQyeErGXnbtNwmdHoqxbHs0BmwtP4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6d74f3d8--bff6--5917--9df4--f8420d533035-osd--block--6d74f3d8--bff6--5917--9df4--f8420d533035'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-33A23c-Engk-9daO-jSPu-Tl11-ekzq-Jb8fW0', 'scsi-0QEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466', 'scsi-SQEMU_QEMU_HARDDISK_f8a75de5-2ee8-4f26-b825-06a074879466'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494', 'dm-uuid-LVM-ZOWwtAXmXVeZGdA6c4d19phCtA4iFWHEWLP3dDLMb4oHu8JWx5caD1wehFycts3r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230746 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047', 'scsi-SQEMU_QEMU_HARDDISK_5c872331-8a67-44e1-93cf-3b447520d047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230752 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230758 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230767 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230773 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.230781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230798 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_41c9e370-cce9-4a92-aa7a-13c8738045eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d2a42094--2be0--50d9--ab62--bd2425088ba2-osd--block--d2a42094--2be0--50d9--ab62--bd2425088ba2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YSmTwt-CYiQ-X7jk-JP2g-hMRT-5ooj-Q2UMoO', 'scsi-0QEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f', 'scsi-SQEMU_QEMU_HARDDISK_bf03eb4f-be44-4071-9b80-940b5dcac70f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ed835e4d--3c58--59bb--af9d--6d23bfbc2494-osd--block--ed835e4d--3c58--59bb--af9d--6d23bfbc2494'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-72ezyk-TfkG-aAnx-KaAz-mKlf-IuXi-AZeHcs', 'scsi-0QEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567', 'scsi-SQEMU_QEMU_HARDDISK_6d0a5819-af6a-4d5a-b5d8-55d4de9ca567'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5', 'scsi-SQEMU_QEMU_HARDDISK_0911be4c-6cd6-4ed2-95f2-3749c0002df5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:56:33.230872 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.230878 | orchestrator | 2026-04-08 00:56:33.230883 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-08 00:56:33.230889 | orchestrator | Wednesday 08 April 2026 00:54:58 +0000 (0:00:00.588) 0:00:17.177 ******* 2026-04-08 00:56:33.230895 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:56:33.230900 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:56:33.230906 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:56:33.230911 | orchestrator | 2026-04-08 00:56:33.230917 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-04-08 00:56:33.230922 | orchestrator | Wednesday 08 April 2026 00:54:59 +0000 (0:00:00.638) 0:00:17.816 ******* 2026-04-08 00:56:33.230928 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:56:33.230933 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:56:33.230939 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:56:33.230944 | orchestrator | 2026-04-08 00:56:33.230950 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-08 00:56:33.230955 | orchestrator | Wednesday 08 April 2026 00:54:59 +0000 (0:00:00.489) 0:00:18.305 ******* 2026-04-08 00:56:33.230961 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:56:33.230966 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:56:33.230972 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:56:33.230980 | orchestrator | 2026-04-08 00:56:33.230986 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-08 00:56:33.230992 | orchestrator | Wednesday 08 April 2026 00:55:00 +0000 (0:00:00.722) 0:00:19.028 ******* 2026-04-08 00:56:33.230997 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231003 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.231008 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.231013 | orchestrator | 2026-04-08 00:56:33.231019 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-08 00:56:33.231024 | orchestrator | Wednesday 08 April 2026 00:55:00 +0000 (0:00:00.294) 0:00:19.323 ******* 2026-04-08 00:56:33.231030 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231035 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.231041 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.231046 | orchestrator | 2026-04-08 00:56:33.231051 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-04-08 00:56:33.231057 | orchestrator | Wednesday 08 April 2026 00:55:01 +0000 (0:00:00.415) 0:00:19.739 ******* 2026-04-08 00:56:33.231062 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231068 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.231073 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.231079 | orchestrator | 2026-04-08 00:56:33.231084 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-08 00:56:33.231089 | orchestrator | Wednesday 08 April 2026 00:55:01 +0000 (0:00:00.521) 0:00:20.260 ******* 2026-04-08 00:56:33.231095 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-08 00:56:33.231101 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-08 00:56:33.231106 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-08 00:56:33.231112 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-08 00:56:33.231117 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-08 00:56:33.231122 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-08 00:56:33.231128 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-08 00:56:33.231133 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-08 00:56:33.231139 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-08 00:56:33.231144 | orchestrator | 2026-04-08 00:56:33.231150 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-08 00:56:33.231155 | orchestrator | Wednesday 08 April 2026 00:55:02 +0000 (0:00:00.809) 0:00:21.070 ******* 2026-04-08 00:56:33.231160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-08 00:56:33.231166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-08 00:56:33.231344 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-04-08 00:56:33.231355 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-08 00:56:33.231366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-08 00:56:33.231376 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-08 00:56:33.231382 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.231387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-08 00:56:33.231392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-08 00:56:33.231398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-08 00:56:33.231403 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.231409 | orchestrator | 2026-04-08 00:56:33.231414 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-08 00:56:33.231420 | orchestrator | Wednesday 08 April 2026 00:55:02 +0000 (0:00:00.364) 0:00:21.435 ******* 2026-04-08 00:56:33.231426 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:56:33.231437 | orchestrator | 2026-04-08 00:56:33.231443 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-08 00:56:33.231450 | orchestrator | Wednesday 08 April 2026 00:55:03 +0000 (0:00:00.679) 0:00:22.114 ******* 2026-04-08 00:56:33.231460 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231465 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.231471 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.231476 | orchestrator | 2026-04-08 00:56:33.231482 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-04-08 00:56:33.231487 | orchestrator | Wednesday 08 April 2026 00:55:03 +0000 (0:00:00.317) 0:00:22.432 ******* 2026-04-08 00:56:33.231492 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231498 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.231503 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.231509 | orchestrator | 2026-04-08 00:56:33.231514 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-08 00:56:33.231519 | orchestrator | Wednesday 08 April 2026 00:55:04 +0000 (0:00:00.331) 0:00:22.763 ******* 2026-04-08 00:56:33.231529 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231537 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.231546 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:56:33.231555 | orchestrator | 2026-04-08 00:56:33.231564 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-08 00:56:33.231572 | orchestrator | Wednesday 08 April 2026 00:55:04 +0000 (0:00:00.296) 0:00:23.060 ******* 2026-04-08 00:56:33.231580 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:56:33.231588 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:56:33.231598 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:56:33.231606 | orchestrator | 2026-04-08 00:56:33.231614 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-08 00:56:33.231623 | orchestrator | Wednesday 08 April 2026 00:55:05 +0000 (0:00:00.584) 0:00:23.645 ******* 2026-04-08 00:56:33.231631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:56:33.231639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:56:33.231647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:56:33.231655 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231663 | 
orchestrator | 2026-04-08 00:56:33.231673 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-08 00:56:33.231682 | orchestrator | Wednesday 08 April 2026 00:55:05 +0000 (0:00:00.371) 0:00:24.016 ******* 2026-04-08 00:56:33.231691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:56:33.231698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:56:33.231703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:56:33.231709 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231714 | orchestrator | 2026-04-08 00:56:33.231719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-08 00:56:33.231725 | orchestrator | Wednesday 08 April 2026 00:55:05 +0000 (0:00:00.362) 0:00:24.379 ******* 2026-04-08 00:56:33.231730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:56:33.231736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:56:33.231741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:56:33.231746 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231752 | orchestrator | 2026-04-08 00:56:33.231757 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-08 00:56:33.231763 | orchestrator | Wednesday 08 April 2026 00:55:06 +0000 (0:00:00.397) 0:00:24.776 ******* 2026-04-08 00:56:33.231768 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:56:33.231773 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:56:33.231779 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:56:33.231784 | orchestrator | 2026-04-08 00:56:33.231794 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-08 00:56:33.231800 | orchestrator | Wednesday 08 April 2026 00:55:06 +0000 
(0:00:00.326) 0:00:25.103 ******* 2026-04-08 00:56:33.231805 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-08 00:56:33.231811 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-08 00:56:33.231816 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-08 00:56:33.231821 | orchestrator | 2026-04-08 00:56:33.231827 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-08 00:56:33.231832 | orchestrator | Wednesday 08 April 2026 00:55:06 +0000 (0:00:00.508) 0:00:25.611 ******* 2026-04-08 00:56:33.231837 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:56:33.231843 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:56:33.231849 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:56:33.231854 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-08 00:56:33.231860 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-08 00:56:33.231872 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:56:33.231877 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:56:33.231883 | orchestrator | 2026-04-08 00:56:33.231888 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-08 00:56:33.231893 | orchestrator | Wednesday 08 April 2026 00:55:07 +0000 (0:00:00.956) 0:00:26.568 ******* 2026-04-08 00:56:33.231899 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:56:33.231904 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:56:33.231910 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:56:33.231915 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-08 00:56:33.231920 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-08 00:56:33.231929 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:56:33.231946 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:56:33.231958 | orchestrator | 2026-04-08 00:56:33.231968 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-08 00:56:33.231975 | orchestrator | Wednesday 08 April 2026 00:55:09 +0000 (0:00:01.967) 0:00:28.536 ******* 2026-04-08 00:56:33.231984 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:56:33.231993 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:56:33.232002 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-08 00:56:33.232012 | orchestrator | 2026-04-08 00:56:33.232022 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-08 00:56:33.232031 | orchestrator | Wednesday 08 April 2026 00:55:10 +0000 (0:00:00.392) 0:00:28.928 ******* 2026-04-08 00:56:33.232041 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:56:33.232052 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-04-08 00:56:33.232061 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:56:33.232078 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:56:33.232087 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:56:33.232097 | orchestrator | 2026-04-08 00:56:33.232105 | orchestrator | TASK [generate keys] *********************************************************** 2026-04-08 00:56:33.232113 | orchestrator | Wednesday 08 April 2026 00:55:46 +0000 (0:00:35.805) 0:01:04.733 ******* 2026-04-08 00:56:33.232123 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232131 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232137 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232144 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232150 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232156 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 
00:56:33.232163 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-08 00:56:33.232169 | orchestrator | 2026-04-08 00:56:33.232175 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-08 00:56:33.232181 | orchestrator | Wednesday 08 April 2026 00:56:05 +0000 (0:00:19.035) 0:01:23.768 ******* 2026-04-08 00:56:33.232187 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232194 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232200 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232227 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232239 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232245 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232251 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:56:33.232257 | orchestrator | 2026-04-08 00:56:33.232263 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-08 00:56:33.232270 | orchestrator | Wednesday 08 April 2026 00:56:14 +0000 (0:00:09.822) 0:01:33.591 ******* 2026-04-08 00:56:33.232276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232282 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:56:33.232288 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:56:33.232295 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232301 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:56:33.232312 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:56:33.232319 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232325 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:56:33.232331 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:56:33.232342 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232348 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:56:33.232354 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:56:33.232361 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232367 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:56:33.232373 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:56:33.232380 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:56:33.232386 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:56:33.232392 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:56:33.232398 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-08 00:56:33.232404 | orchestrator | 2026-04-08 00:56:33.232409 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:56:33.232415 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-08 00:56:33.232422 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-08 00:56:33.232428 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-08 00:56:33.232433 | orchestrator | 2026-04-08 00:56:33.232438 | orchestrator | 2026-04-08 00:56:33.232444 | orchestrator | 2026-04-08 00:56:33.232449 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:56:33.232455 | orchestrator | Wednesday 08 April 2026 00:56:32 +0000 (0:00:17.678) 0:01:51.270 ******* 2026-04-08 00:56:33.232460 | orchestrator | =============================================================================== 2026-04-08 00:56:33.232465 | orchestrator | create openstack pool(s) ----------------------------------------------- 35.81s 2026-04-08 00:56:33.232470 | orchestrator | generate keys ---------------------------------------------------------- 19.04s 2026-04-08 00:56:33.232476 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.68s 2026-04-08 00:56:33.232481 | orchestrator | get keys from monitors -------------------------------------------------- 9.82s 2026-04-08 00:56:33.232486 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.82s 2026-04-08 00:56:33.232492 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.97s 2026-04-08 00:56:33.232497 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.41s 2026-04-08 00:56:33.232502 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.98s 2026-04-08 00:56:33.232508 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.96s 2026-04-08 00:56:33.232513 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.81s 2026-04-08 
00:56:33.232518 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.77s 2026-04-08 00:56:33.232524 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s 2026-04-08 00:56:33.232529 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.71s 2026-04-08 00:56:33.232534 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.68s 2026-04-08 00:56:33.232540 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s 2026-04-08 00:56:33.232545 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s 2026-04-08 00:56:33.232554 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2026-04-08 00:56:33.232564 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s 2026-04-08 00:56:33.232570 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.58s 2026-04-08 00:56:33.232575 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s 2026-04-08 00:56:33.232580 | orchestrator | 2026-04-08 00:56:33 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:56:33.232586 | orchestrator | 2026-04-08 00:56:33 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:56:33.232591 | orchestrator | 2026-04-08 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:56:36.274255 | orchestrator | 2026-04-08 00:56:36 | INFO  | Task af7d79bc-de3f-4341-8a76-bd9cd938c05b is in state STARTED 2026-04-08 00:56:36.274549 | orchestrator | 2026-04-08 00:56:36 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:56:36.275436 | orchestrator | 2026-04-08 00:56:36 | INFO  | Task 
2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:56:36.275450 | orchestrator | 2026-04-08 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:09.811027 | orchestrator | 2026-04-08 00:57:09 | INFO  | Task af7d79bc-de3f-4341-8a76-bd9cd938c05b is in state STARTED 2026-04-08 00:57:09.812074 | orchestrator | 2026-04-08 00:57:09 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:57:09.814171 | orchestrator | 2026-04-08 00:57:09 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:09.814251 | orchestrator | 2026-04-08 00:57:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:12.881819 | orchestrator | 2026-04-08 00:57:12 | INFO  | Task 
af7d79bc-de3f-4341-8a76-bd9cd938c05b is in state SUCCESS 2026-04-08 00:57:12.883365 | orchestrator | 2026-04-08 00:57:12 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:12.884632 | orchestrator | 2026-04-08 00:57:12 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:57:12.886141 | orchestrator | 2026-04-08 00:57:12 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:12.886293 | orchestrator | 2026-04-08 00:57:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:15.936353 | orchestrator | 2026-04-08 00:57:15 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:15.937081 | orchestrator | 2026-04-08 00:57:15 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:57:15.938161 | orchestrator | 2026-04-08 00:57:15 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:15.938249 | orchestrator | 2026-04-08 00:57:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:18.985741 | orchestrator | 2026-04-08 00:57:18 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:18.985853 | orchestrator | 2026-04-08 00:57:18 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:57:18.985872 | orchestrator | 2026-04-08 00:57:18 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:18.985886 | orchestrator | 2026-04-08 00:57:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:22.043339 | orchestrator | 2026-04-08 00:57:22 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:22.043852 | orchestrator | 2026-04-08 00:57:22 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:57:22.045086 | orchestrator | 2026-04-08 00:57:22 | INFO  | Task 
2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:22.045157 | orchestrator | 2026-04-08 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:25.095542 | orchestrator | 2026-04-08 00:57:25 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:25.096043 | orchestrator | 2026-04-08 00:57:25 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state STARTED 2026-04-08 00:57:25.096860 | orchestrator | 2026-04-08 00:57:25 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:25.096881 | orchestrator | 2026-04-08 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:28.142282 | orchestrator | 2026-04-08 00:57:28 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:28.145861 | orchestrator | 2026-04-08 00:57:28 | INFO  | Task 656e4d79-c3d6-47fb-b71e-b31dd26b46fe is in state SUCCESS 2026-04-08 00:57:28.147993 | orchestrator | 2026-04-08 00:57:28.148054 | orchestrator | 2026-04-08 00:57:28.148063 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-08 00:57:28.148070 | orchestrator | 2026-04-08 00:57:28.148084 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-08 00:57:28.148091 | orchestrator | Wednesday 08 April 2026 00:56:35 +0000 (0:00:00.201) 0:00:00.201 ******* 2026-04-08 00:57:28.148097 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-08 00:57:28.148104 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-08 00:57:28.148111 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-08 00:57:28.148118 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder-backup.keyring)
2026-04-08 00:57:28.148124 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148131 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-08 00:57:28.148137 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-08 00:57:28.148206 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:57:28.148214 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-08 00:57:28.148220 | orchestrator |
2026-04-08 00:57:28.148227 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-08 00:57:28.148233 | orchestrator | Wednesday 08 April 2026 00:56:41 +0000 (0:00:05.186) 0:00:05.388 *******
2026-04-08 00:57:28.148240 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-08 00:57:28.148246 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148253 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148259 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-08 00:57:28.148266 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148272 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-08 00:57:28.148278 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-08 00:57:28.148285 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:57:28.148292 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-08 00:57:28.148298 | orchestrator |
2026-04-08 00:57:28.148305 | orchestrator | TASK [Create share directory] **************************************************
2026-04-08 00:57:28.148312 | orchestrator | Wednesday 08 April 2026 00:56:45 +0000 (0:00:04.536) 0:00:09.925 *******
2026-04-08 00:57:28.148319 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-08 00:57:28.148326 | orchestrator |
2026-04-08 00:57:28.148333 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-08 00:57:28.148339 | orchestrator | Wednesday 08 April 2026 00:56:46 +0000 (0:00:00.880) 0:00:10.806 *******
2026-04-08 00:57:28.148347 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-08 00:57:28.148355 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148362 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148369 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-08 00:57:28.148376 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148383 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-08 00:57:28.148390 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-08 00:57:28.148397 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:57:28.148404 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-08 00:57:28.148410 | orchestrator |
2026-04-08 00:57:28.148417 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-08 00:57:28.148436 | orchestrator | Wednesday 08 April 2026 00:56:59 +0000 (0:00:12.928) 0:00:23.734 *******
2026-04-08 00:57:28.148443 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-08 00:57:28.148450 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-08 00:57:28.148458 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-08 00:57:28.148465 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-08 00:57:28.148489 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-08 00:57:28.148496 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-08 00:57:28.148504 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-08 00:57:28.148510 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-08 00:57:28.148517 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-08 00:57:28.148524 | orchestrator |
2026-04-08 00:57:28.148531 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-08 00:57:28.148538 | orchestrator | Wednesday 08 April 2026 00:57:02 +0000 (0:00:03.433) 0:00:27.168 *******
2026-04-08 00:57:28.148545 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-08 00:57:28.148552 | orchestrator | changed: [testbed-manager]
=> (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148559 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148566 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-08 00:57:28.148573 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-08 00:57:28.148580 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-08 00:57:28.148587 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-08 00:57:28.148594 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:57:28.148601 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-08 00:57:28.148608 | orchestrator |
2026-04-08 00:57:28.148615 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:57:28.148622 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:57:28.148628 | orchestrator |
2026-04-08 00:57:28.148634 | orchestrator |
2026-04-08 00:57:28.148640 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:57:28.148645 | orchestrator | Wednesday 08 April 2026 00:57:10 +0000 (0:00:07.094) 0:00:34.263 *******
2026-04-08 00:57:28.148650 | orchestrator | ===============================================================================
2026-04-08 00:57:28.148656 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.93s
2026-04-08 00:57:28.148661 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.09s
2026-04-08 00:57:28.148667 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.19s
2026-04-08 00:57:28.148672 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.54s
2026-04-08 00:57:28.148678 | orchestrator | Check if target directories exist --------------------------------------- 3.43s
2026-04-08 00:57:28.148683 | orchestrator | Create share directory -------------------------------------------------- 0.88s
2026-04-08 00:57:28.148688 | orchestrator |
2026-04-08 00:57:28.148695 | orchestrator |
2026-04-08 00:57:28.148700 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:57:28.148706 | orchestrator |
2026-04-08 00:57:28.148711 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:57:28.148716 | orchestrator | Wednesday 08 April 2026 00:55:50 +0000 (0:00:00.312) 0:00:00.312 *******
2026-04-08 00:57:28.148722 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:57:28.148729 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:57:28.148735 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:57:28.148740 | orchestrator |
2026-04-08 00:57:28.148746 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:57:28.148751 | orchestrator | Wednesday 08 April 2026 00:55:50 +0000 (0:00:00.286) 0:00:00.599 *******
2026-04-08 00:57:28.148761 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-08 00:57:28.148767 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-08 00:57:28.148772 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-08 00:57:28.148779 | orchestrator |
2026-04-08 00:57:28.148785 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-08 00:57:28.148791 | orchestrator |
2026-04-08 00:57:28.148797 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-08 00:57:28.148803 | orchestrator | Wednesday 08 April 2026 00:55:50 +0000
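The ceph play above fetches the client keyrings from testbed-node-0 and fans them out into per-service kolla overlay directories on the manager. A sketch of that fan-out as plain data; the keyring-to-directory pairing below is inferred from the item lists in the log and is an assumption, not the playbook's actual variable:

```python
from pathlib import Path

# Inferred mapping of ceph client keyrings to the overlay directories
# checked by "Check if target directories exist" (assumption, for
# illustration only; the cinder keyring appears under several services).
KEYRING_TARGETS = {
    "ceph.client.admin.keyring": ["environments/infrastructure/files/ceph"],
    "ceph.client.cinder.keyring": [
        "environments/kolla/files/overlays/cinder/cinder-volume",
        "environments/kolla/files/overlays/cinder/cinder-backup",
        "environments/kolla/files/overlays/nova",
    ],
    "ceph.client.cinder-backup.keyring": [
        "environments/kolla/files/overlays/cinder/cinder-backup",
    ],
    "ceph.client.nova.keyring": ["environments/kolla/files/overlays/nova"],
    "ceph.client.glance.keyring": ["environments/kolla/files/overlays/glance"],
    "ceph.client.gnocchi.keyring": ["environments/kolla/files/overlays/gnocchi"],
    "ceph.client.manila.keyring": ["environments/kolla/files/overlays/manila"],
}

def copy_plan(base="/opt/configuration"):
    """Return (keyring, destination path) pairs for every target directory."""
    return [
        (keyring, str(Path(base) / target / keyring))
        for keyring, targets in KEYRING_TARGETS.items()
        for target in targets
    ]
```

Expressing the fan-out as data keeps the copy loop itself trivial, which is essentially what the `with_items`-style tasks in the play are doing.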
(0:00:00.337) 0:00:00.936 ******* 2026-04-08 00:57:28.148809 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:57:28.148816 | orchestrator | 2026-04-08 00:57:28.148823 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-08 00:57:28.148829 | orchestrator | Wednesday 08 April 2026 00:55:51 +0000 (0:00:00.562) 0:00:01.499 ******* 2026-04-08 00:57:28.148854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.148868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.148887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.148905 | orchestrator | 2026-04-08 00:57:28.148917 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-08 00:57:28.148924 | orchestrator | Wednesday 08 April 2026 00:55:52 +0000 (0:00:01.444) 0:00:02.944 ******* 2026-04-08 00:57:28.148931 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.148937 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.148944 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.148951 | orchestrator | 2026-04-08 00:57:28.148957 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-08 00:57:28.148970 | orchestrator | Wednesday 08 April 2026 00:55:53 +0000 (0:00:00.280) 0:00:03.224 ******* 2026-04-08 00:57:28.148976 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-08 00:57:28.148983 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-08 00:57:28.148990 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-08 00:57:28.148997 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-08 00:57:28.149004 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-08 00:57:28.149011 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-08 00:57:28.149017 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-08 00:57:28.149023 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-08 00:57:28.149030 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-08 00:57:28.149037 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-08 00:57:28.149043 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-08 00:57:28.149050 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-08 00:57:28.149057 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-08 00:57:28.149064 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-08 00:57:28.149071 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-08 00:57:28.149078 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-08 00:57:28.149085 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-08 00:57:28.149092 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-08 00:57:28.149099 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-08 00:57:28.149105 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-08 00:57:28.149112 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-08 00:57:28.149119 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-08 00:57:28.149130 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-08 00:57:28.149136 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-08 00:57:28.149143 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-08 00:57:28.149151 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-08 00:57:28.149157 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-08 00:57:28.149179 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-08 00:57:28.149186 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-08 00:57:28.149191 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-08 00:57:28.149197 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for
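Note that the `enabled` values in the policy loop arrive in mixed forms: Python booleans (`True`, `False`) alongside Ansible-style strings (`'yes'`, `'no'`). A normalizer in the spirit of Ansible's `bool` filter (a simplified sketch, not the actual implementation) shows how both forms collapse to a single truth value that decides between `included:` and `skipping:`:

```python
# String spellings commonly accepted as booleans in Ansible-style config.
TRUTHY = {"yes", "on", "true", "1"}
FALSY = {"no", "off", "false", "0"}

def to_bool(value):
    """Collapse bool or yes/no-style string values to a Python bool."""
    if isinstance(value, bool):
        return value
    text = str(value).strip().lower()
    if text in TRUTHY:
        return True
    if text in FALSY:
        return False
    raise ValueError(f"not a boolean-like value: {value!r}")

def partition_services(services):
    """Split service dicts into (included, skipped) by their enabled flag."""
    included = [s["name"] for s in services if to_bool(s["enabled"])]
    skipped = [s["name"] for s in services if not to_bool(s["enabled"])]
    return included, skipped
```

Applied to the items in the log, `heat` ('no') and `ironic` (False) land in the skipped bucket, while `cinder` ('yes') and `designate` (True) are included, matching the task output above.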
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-08 00:57:28.149208 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-08 00:57:28.149241 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-08 00:57:28.149248 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-08 00:57:28.149253 | orchestrator | 2026-04-08 00:57:28.149259 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149265 | orchestrator | Wednesday 08 April 2026 00:55:53 +0000 (0:00:00.657) 0:00:03.882 ******* 2026-04-08 00:57:28.149271 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149277 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149282 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149288 | orchestrator | 2026-04-08 00:57:28.149294 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149300 | orchestrator | Wednesday 08 April 2026 00:55:54 +0000 (0:00:00.363) 0:00:04.246 ******* 2026-04-08 00:57:28.149306 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149312 | orchestrator | 2026-04-08 00:57:28.149317 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149323 | orchestrator | Wednesday 08 April 2026 00:55:54 +0000 (0:00:00.112) 0:00:04.358 ******* 2026-04-08 00:57:28.149329 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149334 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.149340 | orchestrator | 
skipping: [testbed-node-2] 2026-04-08 00:57:28.149345 | orchestrator | 2026-04-08 00:57:28.149351 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149357 | orchestrator | Wednesday 08 April 2026 00:55:54 +0000 (0:00:00.258) 0:00:04.617 ******* 2026-04-08 00:57:28.149362 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149368 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149374 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149379 | orchestrator | 2026-04-08 00:57:28.149385 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149390 | orchestrator | Wednesday 08 April 2026 00:55:54 +0000 (0:00:00.279) 0:00:04.896 ******* 2026-04-08 00:57:28.149396 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149401 | orchestrator | 2026-04-08 00:57:28.149407 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149413 | orchestrator | Wednesday 08 April 2026 00:55:54 +0000 (0:00:00.127) 0:00:05.023 ******* 2026-04-08 00:57:28.149418 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149424 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.149430 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149437 | orchestrator | 2026-04-08 00:57:28.149443 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149449 | orchestrator | Wednesday 08 April 2026 00:55:55 +0000 (0:00:00.449) 0:00:05.473 ******* 2026-04-08 00:57:28.149455 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149460 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149466 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149471 | orchestrator | 2026-04-08 00:57:28.149481 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-04-08 00:57:28.149487 | orchestrator | Wednesday 08 April 2026 00:55:55 +0000 (0:00:00.289) 0:00:05.762 ******* 2026-04-08 00:57:28.149493 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149499 | orchestrator | 2026-04-08 00:57:28.149505 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149512 | orchestrator | Wednesday 08 April 2026 00:55:55 +0000 (0:00:00.118) 0:00:05.881 ******* 2026-04-08 00:57:28.149521 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149525 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.149529 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149532 | orchestrator | 2026-04-08 00:57:28.149536 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149545 | orchestrator | Wednesday 08 April 2026 00:55:56 +0000 (0:00:00.265) 0:00:06.146 ******* 2026-04-08 00:57:28.149549 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149553 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149557 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149561 | orchestrator | 2026-04-08 00:57:28.149565 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149568 | orchestrator | Wednesday 08 April 2026 00:55:56 +0000 (0:00:00.314) 0:00:06.460 ******* 2026-04-08 00:57:28.149572 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149576 | orchestrator | 2026-04-08 00:57:28.149580 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149583 | orchestrator | Wednesday 08 April 2026 00:55:56 +0000 (0:00:00.123) 0:00:06.584 ******* 2026-04-08 00:57:28.149587 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149591 | orchestrator | skipping: [testbed-node-1] 2026-04-08 
00:57:28.149594 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149598 | orchestrator | 2026-04-08 00:57:28.149602 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149606 | orchestrator | Wednesday 08 April 2026 00:55:56 +0000 (0:00:00.435) 0:00:07.019 ******* 2026-04-08 00:57:28.149610 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149613 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149617 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149621 | orchestrator | 2026-04-08 00:57:28.149624 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149628 | orchestrator | Wednesday 08 April 2026 00:55:57 +0000 (0:00:00.299) 0:00:07.319 ******* 2026-04-08 00:57:28.149632 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149636 | orchestrator | 2026-04-08 00:57:28.149639 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149643 | orchestrator | Wednesday 08 April 2026 00:55:57 +0000 (0:00:00.118) 0:00:07.438 ******* 2026-04-08 00:57:28.149647 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149651 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.149654 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149658 | orchestrator | 2026-04-08 00:57:28.149662 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149665 | orchestrator | Wednesday 08 April 2026 00:55:57 +0000 (0:00:00.283) 0:00:07.721 ******* 2026-04-08 00:57:28.149669 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149673 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149677 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149680 | orchestrator | 2026-04-08 00:57:28.149684 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-04-08 00:57:28.149688 | orchestrator | Wednesday 08 April 2026 00:55:58 +0000 (0:00:00.498) 0:00:08.220 ******* 2026-04-08 00:57:28.149692 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149696 | orchestrator | 2026-04-08 00:57:28.149699 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149703 | orchestrator | Wednesday 08 April 2026 00:55:58 +0000 (0:00:00.127) 0:00:08.348 ******* 2026-04-08 00:57:28.149707 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149710 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.149714 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149718 | orchestrator | 2026-04-08 00:57:28.149722 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149725 | orchestrator | Wednesday 08 April 2026 00:55:58 +0000 (0:00:00.300) 0:00:08.648 ******* 2026-04-08 00:57:28.149733 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149737 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149740 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149744 | orchestrator | 2026-04-08 00:57:28.149748 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149752 | orchestrator | Wednesday 08 April 2026 00:55:58 +0000 (0:00:00.296) 0:00:08.944 ******* 2026-04-08 00:57:28.149756 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149759 | orchestrator | 2026-04-08 00:57:28.149763 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149767 | orchestrator | Wednesday 08 April 2026 00:55:58 +0000 (0:00:00.128) 0:00:09.073 ******* 2026-04-08 00:57:28.149771 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149774 | orchestrator | skipping: [testbed-node-1] 
2026-04-08 00:57:28.149778 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149782 | orchestrator | 2026-04-08 00:57:28.149785 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149789 | orchestrator | Wednesday 08 April 2026 00:55:59 +0000 (0:00:00.283) 0:00:09.357 ******* 2026-04-08 00:57:28.149793 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149797 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149800 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149804 | orchestrator | 2026-04-08 00:57:28.149808 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149811 | orchestrator | Wednesday 08 April 2026 00:55:59 +0000 (0:00:00.559) 0:00:09.916 ******* 2026-04-08 00:57:28.149815 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149819 | orchestrator | 2026-04-08 00:57:28.149823 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149826 | orchestrator | Wednesday 08 April 2026 00:55:59 +0000 (0:00:00.123) 0:00:10.040 ******* 2026-04-08 00:57:28.149830 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149834 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.149840 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149844 | orchestrator | 2026-04-08 00:57:28.149847 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149851 | orchestrator | Wednesday 08 April 2026 00:56:00 +0000 (0:00:00.278) 0:00:10.318 ******* 2026-04-08 00:57:28.149855 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149859 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149862 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149866 | orchestrator | 2026-04-08 00:57:28.149870 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149874 | orchestrator | Wednesday 08 April 2026 00:56:00 +0000 (0:00:00.368) 0:00:10.687 ******* 2026-04-08 00:57:28.149878 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149881 | orchestrator | 2026-04-08 00:57:28.149888 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149892 | orchestrator | Wednesday 08 April 2026 00:56:00 +0000 (0:00:00.119) 0:00:10.806 ******* 2026-04-08 00:57:28.149896 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149899 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.149903 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149907 | orchestrator | 2026-04-08 00:57:28.149911 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:57:28.149914 | orchestrator | Wednesday 08 April 2026 00:56:00 +0000 (0:00:00.273) 0:00:11.080 ******* 2026-04-08 00:57:28.149918 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:28.149922 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:28.149926 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:28.149929 | orchestrator | 2026-04-08 00:57:28.149933 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:57:28.149937 | orchestrator | Wednesday 08 April 2026 00:56:01 +0000 (0:00:00.506) 0:00:11.587 ******* 2026-04-08 00:57:28.149941 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149951 | orchestrator | 2026-04-08 00:57:28.149955 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:57:28.149959 | orchestrator | Wednesday 08 April 2026 00:56:01 +0000 (0:00:00.137) 0:00:11.724 ******* 2026-04-08 00:57:28.149962 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.149966 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:57:28.149970 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.149973 | orchestrator | 2026-04-08 00:57:28.149977 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-08 00:57:28.149981 | orchestrator | Wednesday 08 April 2026 00:56:01 +0000 (0:00:00.287) 0:00:12.011 ******* 2026-04-08 00:57:28.149985 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:57:28.149988 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:28.149992 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:57:28.149996 | orchestrator | 2026-04-08 00:57:28.149999 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-08 00:57:28.150003 | orchestrator | Wednesday 08 April 2026 00:56:03 +0000 (0:00:01.715) 0:00:13.726 ******* 2026-04-08 00:57:28.150007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-08 00:57:28.150011 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-08 00:57:28.150065 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-08 00:57:28.150069 | orchestrator | 2026-04-08 00:57:28.150073 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-08 00:57:28.150077 | orchestrator | Wednesday 08 April 2026 00:56:05 +0000 (0:00:02.339) 0:00:16.066 ******* 2026-04-08 00:57:28.150080 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-08 00:57:28.150085 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-08 00:57:28.150088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-08 00:57:28.150092 | 
orchestrator | 2026-04-08 00:57:28.150096 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-08 00:57:28.150100 | orchestrator | Wednesday 08 April 2026 00:56:08 +0000 (0:00:02.085) 0:00:18.151 ******* 2026-04-08 00:57:28.150103 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-08 00:57:28.150107 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-08 00:57:28.150111 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-08 00:57:28.150115 | orchestrator | 2026-04-08 00:57:28.150119 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-08 00:57:28.150123 | orchestrator | Wednesday 08 April 2026 00:56:09 +0000 (0:00:01.712) 0:00:19.864 ******* 2026-04-08 00:57:28.150126 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.150130 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.150134 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.150138 | orchestrator | 2026-04-08 00:57:28.150141 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-08 00:57:28.150145 | orchestrator | Wednesday 08 April 2026 00:56:10 +0000 (0:00:00.295) 0:00:20.159 ******* 2026-04-08 00:57:28.150149 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.150153 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.150156 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.150160 | orchestrator | 2026-04-08 00:57:28.150224 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-08 00:57:28.150228 | orchestrator | Wednesday 08 April 2026 00:56:10 +0000 (0:00:00.269) 0:00:20.428 ******* 2026-04-08 00:57:28.150231 | 
orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:57:28.150240 | orchestrator | 2026-04-08 00:57:28.150247 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-08 00:57:28.150251 | orchestrator | Wednesday 08 April 2026 00:56:11 +0000 (0:00:00.795) 0:00:21.224 ******* 2026-04-08 00:57:28.150263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.150360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.150403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.150407 | orchestrator | 2026-04-08 00:57:28.150411 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-08 00:57:28.150415 | orchestrator | Wednesday 08 April 2026 00:56:12 +0000 (0:00:01.512) 0:00:22.736 ******* 2026-04-08 00:57:28.150426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:57:28.150434 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.150438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:57:28.150442 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.150453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:57:28.150461 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.150465 | orchestrator | 2026-04-08 00:57:28.150468 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-08 00:57:28.150472 | orchestrator | Wednesday 08 April 2026 00:56:13 +0000 (0:00:00.917) 0:00:23.654 ******* 2026-04-08 00:57:28.150476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:57:28.150480 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.150491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:57:28.150498 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.150503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:57:28.150510 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.150514 | orchestrator | 2026-04-08 00:57:28.150517 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-08 00:57:28.150521 | orchestrator | Wednesday 08 April 2026 00:56:14 +0000 (0:00:01.150) 0:00:24.804 ******* 2026-04-08 00:57:28.150532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.150537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.150550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:57:28.150555 | orchestrator | 2026-04-08 00:57:28.150559 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-08 00:57:28.150563 | orchestrator | Wednesday 08 April 2026 00:56:16 +0000 (0:00:01.344) 0:00:26.149 ******* 2026-04-08 00:57:28.150566 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:28.150570 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:28.150574 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:28.150578 | orchestrator | 2026-04-08 00:57:28.150581 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-08 00:57:28.150585 | orchestrator | Wednesday 08 April 2026 00:56:16 +0000 (0:00:00.277) 0:00:26.426 ******* 2026-04-08 00:57:28.150589 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:57:28.150593 | orchestrator | 2026-04-08 00:57:28.150597 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-08 00:57:28.150601 | orchestrator | Wednesday 08 April 2026 00:56:16 +0000 (0:00:00.623) 0:00:27.050 ******* 2026-04-08 00:57:28.150609 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:28.150613 | orchestrator | 2026-04-08 00:57:28.150617 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-08 00:57:28.150620 | orchestrator | Wednesday 08 April 2026 00:56:19 +0000 (0:00:02.555) 0:00:29.605 ******* 2026-04-08 00:57:28.150624 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:28.150628 | orchestrator | 2026-04-08 00:57:28.150632 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-08 00:57:28.150636 | orchestrator | Wednesday 08 April 2026 00:56:22 +0000 
(0:00:02.935) 0:00:32.541 ******* 2026-04-08 00:57:28.150639 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:28.150643 | orchestrator | 2026-04-08 00:57:28.150647 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-08 00:57:28.150650 | orchestrator | Wednesday 08 April 2026 00:56:39 +0000 (0:00:17.158) 0:00:49.700 ******* 2026-04-08 00:57:28.150654 | orchestrator | 2026-04-08 00:57:28.150658 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-08 00:57:28.150662 | orchestrator | Wednesday 08 April 2026 00:56:39 +0000 (0:00:00.062) 0:00:49.763 ******* 2026-04-08 00:57:28.150668 | orchestrator | 2026-04-08 00:57:28.150674 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-08 00:57:28.150679 | orchestrator | Wednesday 08 April 2026 00:56:39 +0000 (0:00:00.063) 0:00:49.826 ******* 2026-04-08 00:57:28.150684 | orchestrator | 2026-04-08 00:57:28.150689 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-08 00:57:28.150695 | orchestrator | Wednesday 08 April 2026 00:56:39 +0000 (0:00:00.064) 0:00:49.890 ******* 2026-04-08 00:57:28.150700 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:28.150705 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:57:28.150710 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:57:28.150715 | orchestrator | 2026-04-08 00:57:28.150727 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:57:28.150739 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-08 00:57:28.150745 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-08 00:57:28.150752 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 
skipped=15  rescued=0 ignored=0 2026-04-08 00:57:28.150758 | orchestrator | 2026-04-08 00:57:28.150764 | orchestrator | 2026-04-08 00:57:28.150775 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:57:28.150780 | orchestrator | Wednesday 08 April 2026 00:57:26 +0000 (0:00:47.169) 0:01:37.060 ******* 2026-04-08 00:57:28.150786 | orchestrator | =============================================================================== 2026-04-08 00:57:28.150793 | orchestrator | horizon : Restart horizon container ------------------------------------ 47.17s 2026-04-08 00:57:28.150799 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.16s 2026-04-08 00:57:28.150806 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.94s 2026-04-08 00:57:28.150812 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.56s 2026-04-08 00:57:28.150819 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.34s 2026-04-08 00:57:28.150825 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.09s 2026-04-08 00:57:28.150831 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.72s 2026-04-08 00:57:28.150838 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.71s 2026-04-08 00:57:28.150844 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.51s 2026-04-08 00:57:28.150850 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.44s 2026-04-08 00:57:28.150862 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.34s 2026-04-08 00:57:28.150868 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.15s 2026-04-08 
00:57:28.150875 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.92s 2026-04-08 00:57:28.150881 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2026-04-08 00:57:28.150885 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2026-04-08 00:57:28.150889 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-04-08 00:57:28.150892 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-04-08 00:57:28.150896 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-04-08 00:57:28.150900 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2026-04-08 00:57:28.150903 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2026-04-08 00:57:28.150907 | orchestrator | 2026-04-08 00:57:28 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:28.150911 | orchestrator | 2026-04-08 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:31.194552 | orchestrator | 2026-04-08 00:57:31 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:31.196418 | orchestrator | 2026-04-08 00:57:31 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:31.196489 | orchestrator | 2026-04-08 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:34.233020 | orchestrator | 2026-04-08 00:57:34 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state STARTED 2026-04-08 00:57:34.235594 | orchestrator | 2026-04-08 00:57:34 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:57:34.235649 | orchestrator | 2026-04-08 00:57:34 | INFO  | Wait 1 second(s) until the next 
check 2026-04-08
00:58:07.772898 | orchestrator | 2026-04-08 00:58:07 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:58:07.772953 | orchestrator | 2026-04-08 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:10.823375 | orchestrator | 2026-04-08 00:58:10 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:10.825325 | orchestrator | 2026-04-08 00:58:10 | INFO  | Task ab4321f9-9c0f-4b45-8322-8459d0cf1c72 is in state STARTED 2026-04-08 00:58:10.829110 | orchestrator | 2026-04-08 00:58:10 | INFO  | Task 66402946-9939-4646-9baa-0be4516a32f3 is in state SUCCESS 2026-04-08 00:58:10.831093 | orchestrator | 2026-04-08 00:58:10 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:58:10.833065 | orchestrator | 2026-04-08 00:58:10 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:10.833162 | orchestrator | 2026-04-08 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:13.874701 | orchestrator | 2026-04-08 00:58:13 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:13.883098 | orchestrator | 2026-04-08 00:58:13 | INFO  | Task ab4321f9-9c0f-4b45-8322-8459d0cf1c72 is in state STARTED 2026-04-08 00:58:13.883215 | orchestrator | 2026-04-08 00:58:13 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:58:13.883253 | orchestrator | 2026-04-08 00:58:13 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:13.883273 | orchestrator | 2026-04-08 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:16.941393 | orchestrator | 2026-04-08 00:58:16 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:16.942179 | orchestrator | 2026-04-08 00:58:16 | INFO  | Task ab4321f9-9c0f-4b45-8322-8459d0cf1c72 is in state SUCCESS 2026-04-08 00:58:16.942201 | orchestrator 
| 2026-04-08 00:58:16 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:16.942209 | orchestrator | 2026-04-08 00:58:16 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state STARTED 2026-04-08 00:58:16.942216 | orchestrator | 2026-04-08 00:58:16 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:16.953660 | orchestrator | 2026-04-08 00:58:16 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:16.953737 | orchestrator | 2026-04-08 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:29.197055 | orchestrator | 2026-04-08 00:58:29 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:29.204871 | orchestrator | 2026-04-08 00:58:29 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:29.211647 | orchestrator | 2026-04-08 00:58:29 | INFO  | Task 2bde4798-2b39-4a32-9986-12968acf5782 is in state SUCCESS 2026-04-08 00:58:29.212013 | orchestrator | 2026-04-08 00:58:29.212041 | orchestrator | 2026-04-08 00:58:29.212050 | orchestrator | PLAY [Apply role cephclient] ************************************************** 2026-04-08 00:58:29.212059 | orchestrator | 2026-04-08 00:58:29.212066 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ******************** 2026-04-08 00:58:29.212073 | orchestrator | Wednesday 08 April 2026 00:57:14 +0000 (0:00:00.382) 0:00:00.382 ******* 2026-04-08 00:58:29.212080 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-08 00:58:29.212089 | orchestrator | 2026-04-08 00:58:29.212096 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-08 00:58:29.212103 | orchestrator | Wednesday 08 April 2026 00:57:14 +0000 (0:00:00.256) 0:00:00.639 ******* 2026-04-08 00:58:29.212111 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-08 00:58:29.212168 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-08 00:58:29.212178 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-08 00:58:29.212185 | orchestrator | 2026-04-08 00:58:29.212207 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-08 00:58:29.212214 | orchestrator | Wednesday 08 April 2026 00:57:16 +0000 (0:00:01.531) 0:00:02.171 ******* 2026-04-08 00:58:29.212221 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-08 00:58:29.212227 | orchestrator | 2026-04-08 00:58:29.212234 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-08 00:58:29.212241 | orchestrator | Wednesday 08 April 2026 00:57:17 +0000 (0:00:01.184) 0:00:03.356 ******* 2026-04-08 00:58:29.212248 | orchestrator | changed: [testbed-manager] 2026-04-08 00:58:29.212254 | orchestrator | 2026-04-08 00:58:29.212261 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-08 00:58:29.212268 | orchestrator | Wednesday 08 April 2026 00:57:18 +0000 (0:00:00.896) 0:00:04.252 ******* 2026-04-08 00:58:29.212274 | orchestrator | changed: [testbed-manager] 2026-04-08 00:58:29.212281 | orchestrator | 2026-04-08 00:58:29.212288 | orchestrator | TASK [osism.services.cephclient : Manage 
cephclient service] ******************* 2026-04-08 00:58:29.212294 | orchestrator | Wednesday 08 April 2026 00:57:19 +0000 (0:00:00.823) 0:00:05.076 ******* 2026-04-08 00:58:29.212301 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-08 00:58:29.212307 | orchestrator | ok: [testbed-manager] 2026-04-08 00:58:29.212314 | orchestrator | 2026-04-08 00:58:29.212321 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-08 00:58:29.212327 | orchestrator | Wednesday 08 April 2026 00:57:58 +0000 (0:00:39.876) 0:00:44.952 ******* 2026-04-08 00:58:29.212334 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-08 00:58:29.212342 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-08 00:58:29.212348 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-08 00:58:29.212355 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-08 00:58:29.212361 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-08 00:58:29.212368 | orchestrator | 2026-04-08 00:58:29.212375 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-08 00:58:29.212381 | orchestrator | Wednesday 08 April 2026 00:58:03 +0000 (0:00:04.207) 0:00:49.159 ******* 2026-04-08 00:58:29.212388 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-08 00:58:29.212395 | orchestrator | 2026-04-08 00:58:29.212401 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-08 00:58:29.212424 | orchestrator | Wednesday 08 April 2026 00:58:03 +0000 (0:00:00.643) 0:00:49.803 ******* 2026-04-08 00:58:29.212431 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:58:29.212438 | orchestrator | 2026-04-08 00:58:29.212444 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-08 
00:58:29.212451 | orchestrator | Wednesday 08 April 2026 00:58:03 +0000 (0:00:00.134) 0:00:49.938 ******* 2026-04-08 00:58:29.212457 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:58:29.212464 | orchestrator | 2026-04-08 00:58:29.212471 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-08 00:58:29.212480 | orchestrator | Wednesday 08 April 2026 00:58:04 +0000 (0:00:00.301) 0:00:50.240 ******* 2026-04-08 00:58:29.212492 | orchestrator | changed: [testbed-manager] 2026-04-08 00:58:29.212509 | orchestrator | 2026-04-08 00:58:29.212520 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-08 00:58:29.212531 | orchestrator | Wednesday 08 April 2026 00:58:05 +0000 (0:00:01.400) 0:00:51.640 ******* 2026-04-08 00:58:29.212542 | orchestrator | changed: [testbed-manager] 2026-04-08 00:58:29.212553 | orchestrator | 2026-04-08 00:58:29.212563 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-08 00:58:29.212573 | orchestrator | Wednesday 08 April 2026 00:58:06 +0000 (0:00:00.722) 0:00:52.363 ******* 2026-04-08 00:58:29.212584 | orchestrator | changed: [testbed-manager] 2026-04-08 00:58:29.212596 | orchestrator | 2026-04-08 00:58:29.212606 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-08 00:58:29.212616 | orchestrator | Wednesday 08 April 2026 00:58:06 +0000 (0:00:00.584) 0:00:52.947 ******* 2026-04-08 00:58:29.212628 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-08 00:58:29.212639 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-08 00:58:29.212859 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-08 00:58:29.212878 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-08 00:58:29.212885 | orchestrator | 2026-04-08 00:58:29.212892 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-08 00:58:29.212899 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:58:29.212907 | orchestrator | 2026-04-08 00:58:29.212919 | orchestrator | 2026-04-08 00:58:29.212950 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:58:29.212964 | orchestrator | Wednesday 08 April 2026 00:58:08 +0000 (0:00:01.478) 0:00:54.425 ******* 2026-04-08 00:58:29.212974 | orchestrator | =============================================================================== 2026-04-08 00:58:29.212985 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.88s 2026-04-08 00:58:29.212997 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.21s 2026-04-08 00:58:29.213009 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.53s 2026-04-08 00:58:29.213020 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s 2026-04-08 00:58:29.213032 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.40s 2026-04-08 00:58:29.213044 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.18s 2026-04-08 00:58:29.213055 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2026-04-08 00:58:29.213067 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.82s 2026-04-08 00:58:29.213082 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s 2026-04-08 00:58:29.213089 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.64s 2026-04-08 00:58:29.213095 | orchestrator | osism.services.cephclient : Wait for an healthy service 
----------------- 0.58s 2026-04-08 00:58:29.213102 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2026-04-08 00:58:29.213108 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-04-08 00:58:29.213148 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-04-08 00:58:29.213156 | orchestrator | 2026-04-08 00:58:29.213162 | orchestrator | 2026-04-08 00:58:29.213169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:58:29.213176 | orchestrator | 2026-04-08 00:58:29.213183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:58:29.213189 | orchestrator | Wednesday 08 April 2026 00:58:11 +0000 (0:00:00.197) 0:00:00.197 ******* 2026-04-08 00:58:29.213196 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:58:29.213203 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:58:29.213210 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:58:29.213216 | orchestrator | 2026-04-08 00:58:29.213223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:58:29.213230 | orchestrator | Wednesday 08 April 2026 00:58:12 +0000 (0:00:00.384) 0:00:00.582 ******* 2026-04-08 00:58:29.213236 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-08 00:58:29.213243 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-08 00:58:29.213250 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-08 00:58:29.213257 | orchestrator | 2026-04-08 00:58:29.213605 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-04-08 00:58:29.213619 | orchestrator | 2026-04-08 00:58:29.213626 | orchestrator | TASK [Waiting for Keystone public port to be UP] 
******************************* 2026-04-08 00:58:29.213633 | orchestrator | Wednesday 08 April 2026 00:58:12 +0000 (0:00:00.524) 0:00:01.107 ******* 2026-04-08 00:58:29.213639 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:58:29.213646 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:58:29.213653 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:58:29.213660 | orchestrator | 2026-04-08 00:58:29.213666 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:58:29.213673 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:58:29.213681 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:58:29.213707 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:58:29.213715 | orchestrator | 2026-04-08 00:58:29.213729 | orchestrator | 2026-04-08 00:58:29.213737 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:58:29.213743 | orchestrator | Wednesday 08 April 2026 00:58:14 +0000 (0:00:01.127) 0:00:02.235 ******* 2026-04-08 00:58:29.213750 | orchestrator | =============================================================================== 2026-04-08 00:58:29.213756 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.13s 2026-04-08 00:58:29.213763 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2026-04-08 00:58:29.213770 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-04-08 00:58:29.213776 | orchestrator | 2026-04-08 00:58:29.214691 | orchestrator | 2026-04-08 00:58:29.214793 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:58:29.214812 | orchestrator | 2026-04-08 
00:58:29.214820 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:58:29.214827 | orchestrator | Wednesday 08 April 2026 00:55:50 +0000 (0:00:00.312) 0:00:00.312 ******* 2026-04-08 00:58:29.214834 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:58:29.214841 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:58:29.214848 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:58:29.214854 | orchestrator | 2026-04-08 00:58:29.214861 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:58:29.214868 | orchestrator | Wednesday 08 April 2026 00:55:50 +0000 (0:00:00.273) 0:00:00.586 ******* 2026-04-08 00:58:29.214875 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-08 00:58:29.214892 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-08 00:58:29.214899 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-08 00:58:29.214905 | orchestrator | 2026-04-08 00:58:29.214912 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-08 00:58:29.214919 | orchestrator | 2026-04-08 00:58:29.214925 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:58:29.214932 | orchestrator | Wednesday 08 April 2026 00:55:50 +0000 (0:00:00.298) 0:00:00.884 ******* 2026-04-08 00:58:29.214940 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:58:29.214947 | orchestrator | 2026-04-08 00:58:29.214954 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-08 00:58:29.214960 | orchestrator | Wednesday 08 April 2026 00:55:51 +0000 (0:00:00.494) 0:00:01.378 ******* 2026-04-08 00:58:29.214979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.214991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
2026-04-08 00:58:29.215029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215096 | orchestrator | 2026-04-08 00:58:29.215102 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-08 00:58:29.215109 | orchestrator | Wednesday 08 April 2026 00:55:53 +0000 (0:00:02.109) 0:00:03.488 ******* 2026-04-08 00:58:29.215184 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.215199 | orchestrator | 2026-04-08 00:58:29.215216 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-08 00:58:29.215228 | orchestrator | Wednesday 08 April 2026 00:55:53 +0000 (0:00:00.108) 0:00:03.596 ******* 2026-04-08 00:58:29.215239 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.215249 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.215259 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.215269 | orchestrator | 2026-04-08 00:58:29.215279 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 
2026-04-08 00:58:29.215291 | orchestrator | Wednesday 08 April 2026 00:55:53 +0000 (0:00:00.257) 0:00:03.854 ******* 2026-04-08 00:58:29.215302 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:58:29.215314 | orchestrator | 2026-04-08 00:58:29.215326 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:58:29.215337 | orchestrator | Wednesday 08 April 2026 00:55:54 +0000 (0:00:00.813) 0:00:04.668 ******* 2026-04-08 00:58:29.215348 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:58:29.215356 | orchestrator | 2026-04-08 00:58:29.215363 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-08 00:58:29.215369 | orchestrator | Wednesday 08 April 2026 00:55:55 +0000 (0:00:00.782) 0:00:05.451 ******* 2026-04-08 00:58:29.215382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215390 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 
00:58:29.215446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215473 | orchestrator | 2026-04-08 00:58:29.215480 | 
orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-08 00:58:29.215486 | orchestrator | Wednesday 08 April 2026 00:55:58 +0000 (0:00:03.226) 0:00:08.677 ******* 2026-04-08 00:58:29.215500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.215508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.215526 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.215533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.215541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.215559 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.215572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.215579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.215597 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.215604 | orchestrator | 2026-04-08 00:58:29.215611 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-08 00:58:29.215618 | orchestrator | Wednesday 08 April 2026 00:55:59 +0000 (0:00:00.579) 0:00:09.257 ******* 2026-04-08 00:58:29.215625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.215640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.215659 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.215669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.215677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215684 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.215696 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.215703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.215715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.215729 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.215736 | orchestrator | 2026-04-08 00:58:29.215743 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-08 00:58:29.215749 | orchestrator | Wednesday 08 April 2026 00:56:00 +0000 (0:00:00.939) 0:00:10.197 ******* 2026-04-08 00:58:29.215760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215842 | orchestrator | 2026-04-08 00:58:29.215848 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-08 00:58:29.215855 | orchestrator | Wednesday 08 April 2026 00:56:03 +0000 (0:00:03.479) 0:00:13.676 ******* 2026-04-08 00:58:29.215867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.215916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.215923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:58:29.215953 | orchestrator | 2026-04-08 00:58:29.215960 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-08 00:58:29.215967 | orchestrator | Wednesday 08 April 
2026 00:56:08 +0000 (0:00:05.327) 0:00:19.004 ******* 2026-04-08 00:58:29.215974 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.215980 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:58:29.215987 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:58:29.215994 | orchestrator | 2026-04-08 00:58:29.216000 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-08 00:58:29.216007 | orchestrator | Wednesday 08 April 2026 00:56:10 +0000 (0:00:01.481) 0:00:20.485 ******* 2026-04-08 00:58:29.216014 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.216020 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.216027 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.216034 | orchestrator | 2026-04-08 00:58:29.216040 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-08 00:58:29.216047 | orchestrator | Wednesday 08 April 2026 00:56:11 +0000 (0:00:01.031) 0:00:21.517 ******* 2026-04-08 00:58:29.216054 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.216060 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.216067 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.216074 | orchestrator | 2026-04-08 00:58:29.216080 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-08 00:58:29.216087 | orchestrator | Wednesday 08 April 2026 00:56:11 +0000 (0:00:00.324) 0:00:21.841 ******* 2026-04-08 00:58:29.216094 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.216101 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.216107 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.216114 | orchestrator | 2026-04-08 00:58:29.216167 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-08 00:58:29.216175 | orchestrator | Wednesday 08 April 2026 
00:56:12 +0000 (0:00:00.283) 0:00:22.125 ******* 2026-04-08 00:58:29.216188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.216196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.216212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.216220 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.216227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.216234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.216245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.216253 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.216260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-08 00:58:29.216275 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:58:29.216282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:58:29.216289 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.216296 | orchestrator | 2026-04-08 00:58:29.216302 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:58:29.216309 | orchestrator | Wednesday 08 April 2026 00:56:12 +0000 (0:00:00.554) 0:00:22.679 ******* 2026-04-08 00:58:29.216316 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.216322 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.216329 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.216335 | orchestrator | 2026-04-08 00:58:29.216342 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-08 00:58:29.216349 | 
orchestrator | Wednesday 08 April 2026 00:56:13 +0000 (0:00:00.496) 0:00:23.176 ******* 2026-04-08 00:58:29.216355 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-08 00:58:29.216363 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-08 00:58:29.216369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-08 00:58:29.216376 | orchestrator | 2026-04-08 00:58:29.216383 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-08 00:58:29.216389 | orchestrator | Wednesday 08 April 2026 00:56:14 +0000 (0:00:01.510) 0:00:24.687 ******* 2026-04-08 00:58:29.216396 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:58:29.216402 | orchestrator | 2026-04-08 00:58:29.216409 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-08 00:58:29.216416 | orchestrator | Wednesday 08 April 2026 00:56:15 +0000 (0:00:01.131) 0:00:25.818 ******* 2026-04-08 00:58:29.216422 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.216429 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.216436 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.216442 | orchestrator | 2026-04-08 00:58:29.216449 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-08 00:58:29.216587 | orchestrator | Wednesday 08 April 2026 00:56:16 +0000 (0:00:00.504) 0:00:26.323 ******* 2026-04-08 00:58:29.216615 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-08 00:58:29.216626 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-08 00:58:29.216635 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:58:29.216645 | orchestrator | 2026-04-08 00:58:29.216662 | orchestrator | TASK [keystone : Set fact with 
the generated cron jobs for building the crontab later] *** 2026-04-08 00:58:29.216683 | orchestrator | Wednesday 08 April 2026 00:56:17 +0000 (0:00:01.076) 0:00:27.400 ******* 2026-04-08 00:58:29.216758 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:58:29.216772 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:58:29.216784 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:58:29.216794 | orchestrator | 2026-04-08 00:58:29.216806 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-08 00:58:29.216815 | orchestrator | Wednesday 08 April 2026 00:56:17 +0000 (0:00:00.402) 0:00:27.802 ******* 2026-04-08 00:58:29.216822 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-08 00:58:29.216829 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-08 00:58:29.216836 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-08 00:58:29.216842 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-08 00:58:29.216850 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-08 00:58:29.216856 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-08 00:58:29.216863 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-08 00:58:29.216870 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-08 00:58:29.216877 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-08 00:58:29.216884 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 
'fernet-push.sh'}) 2026-04-08 00:58:29.216894 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-08 00:58:29.216905 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-08 00:58:29.216927 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-08 00:58:29.216941 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-08 00:58:29.216952 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-08 00:58:29.216962 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-08 00:58:29.216972 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-08 00:58:29.216982 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-08 00:58:29.216992 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-08 00:58:29.217003 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-08 00:58:29.217014 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-08 00:58:29.217024 | orchestrator | 2026-04-08 00:58:29.217035 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-08 00:58:29.217046 | orchestrator | Wednesday 08 April 2026 00:56:26 +0000 (0:00:09.106) 0:00:36.909 ******* 2026-04-08 00:58:29.217058 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-08 00:58:29.217069 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'}) 2026-04-08 00:58:29.217090 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-08 00:58:29.217098 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-08 00:58:29.217104 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-08 00:58:29.217111 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-08 00:58:29.217134 | orchestrator | 2026-04-08 00:58:29.217142 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-08 00:58:29.217148 | orchestrator | Wednesday 08 April 2026 00:56:29 +0000 (0:00:02.405) 0:00:39.314 ******* 2026-04-08 00:58:29.217165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-08 00:58:29.217173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-08 00:58:29.217186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-08 00:58:29.217194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-08 00:58:29.217207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-08 00:58:29.217215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-08 00:58:29.217226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-08 00:58:29.217234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-08 00:58:29.217245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-08 00:58:29.217252 | orchestrator |
2026-04-08 00:58:29.217259 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-08 00:58:29.217266 | orchestrator | Wednesday 08 April 2026 00:56:31 +0000 (0:00:02.211) 0:00:41.526 ******* 2026-04-08 00:58:29.217273 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.217279 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.217286 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.217293 | orchestrator | 2026-04-08 00:58:29.217302 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-08 00:58:29.217309 | orchestrator | Wednesday 08 April 2026 00:56:31 +0000 (0:00:00.383) 0:00:41.909 ******* 2026-04-08 00:58:29.217316 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217322 | orchestrator | 2026-04-08 00:58:29.217329 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-08 00:58:29.217336 | orchestrator | Wednesday 08 April 2026 00:56:34 +0000 (0:00:02.316) 0:00:44.226 ******* 2026-04-08 00:58:29.217342 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217349 | orchestrator | 2026-04-08 00:58:29.217356 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-08 00:58:29.217362 | orchestrator | Wednesday 08 April 2026 00:56:36 +0000 (0:00:02.464) 0:00:46.690 ******* 2026-04-08 00:58:29.217369 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:58:29.217375 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:58:29.217382 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:58:29.217389 | orchestrator | 2026-04-08 00:58:29.217396 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-08 00:58:29.217404 | orchestrator | Wednesday 08 April 2026 00:56:37 +0000 (0:00:00.812) 0:00:47.503 ******* 2026-04-08 00:58:29.217412 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:58:29.217419 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:58:29.217427 | orchestrator | ok: 
[testbed-node-2] 2026-04-08 00:58:29.217434 | orchestrator | 2026-04-08 00:58:29.217442 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-08 00:58:29.217450 | orchestrator | Wednesday 08 April 2026 00:56:37 +0000 (0:00:00.280) 0:00:47.783 ******* 2026-04-08 00:58:29.217458 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.217465 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.217472 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.217479 | orchestrator | 2026-04-08 00:58:29.217485 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-08 00:58:29.217492 | orchestrator | Wednesday 08 April 2026 00:56:37 +0000 (0:00:00.286) 0:00:48.069 ******* 2026-04-08 00:58:29.217498 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217505 | orchestrator | 2026-04-08 00:58:29.217512 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-08 00:58:29.217519 | orchestrator | Wednesday 08 April 2026 00:56:53 +0000 (0:00:15.923) 0:01:03.993 ******* 2026-04-08 00:58:29.217525 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217532 | orchestrator | 2026-04-08 00:58:29.217539 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-08 00:58:29.217545 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:12.009) 0:01:16.002 ******* 2026-04-08 00:58:29.217552 | orchestrator | 2026-04-08 00:58:29.217558 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-08 00:58:29.217565 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:00.082) 0:01:16.085 ******* 2026-04-08 00:58:29.217572 | orchestrator | 2026-04-08 00:58:29.217578 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-08 
00:58:29.217588 | orchestrator | Wednesday 08 April 2026 00:57:06 +0000 (0:00:00.063) 0:01:16.148 ******* 2026-04-08 00:58:29.217595 | orchestrator | 2026-04-08 00:58:29.217602 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-08 00:58:29.217608 | orchestrator | Wednesday 08 April 2026 00:57:06 +0000 (0:00:00.064) 0:01:16.213 ******* 2026-04-08 00:58:29.217615 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217622 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:58:29.217628 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:58:29.217635 | orchestrator | 2026-04-08 00:58:29.217642 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-08 00:58:29.217648 | orchestrator | Wednesday 08 April 2026 00:57:18 +0000 (0:00:12.611) 0:01:28.825 ******* 2026-04-08 00:58:29.217655 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217667 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:58:29.217673 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:58:29.217680 | orchestrator | 2026-04-08 00:58:29.217687 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-08 00:58:29.217693 | orchestrator | Wednesday 08 April 2026 00:57:23 +0000 (0:00:05.128) 0:01:33.954 ******* 2026-04-08 00:58:29.217700 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217707 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:58:29.217713 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:58:29.217720 | orchestrator | 2026-04-08 00:58:29.217727 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:58:29.217733 | orchestrator | Wednesday 08 April 2026 00:57:30 +0000 (0:00:06.614) 0:01:40.569 ******* 2026-04-08 00:58:29.217740 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:58:29.217747 | orchestrator | 2026-04-08 00:58:29.217754 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-08 00:58:29.217760 | orchestrator | Wednesday 08 April 2026 00:57:30 +0000 (0:00:00.501) 0:01:41.071 ******* 2026-04-08 00:58:29.217767 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:58:29.217773 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:58:29.217780 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:58:29.217787 | orchestrator | 2026-04-08 00:58:29.217793 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-08 00:58:29.217800 | orchestrator | Wednesday 08 April 2026 00:57:31 +0000 (0:00:00.762) 0:01:41.834 ******* 2026-04-08 00:58:29.217810 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:58:29.217817 | orchestrator | 2026-04-08 00:58:29.217823 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-08 00:58:29.217830 | orchestrator | Wednesday 08 April 2026 00:57:33 +0000 (0:00:01.686) 0:01:43.520 ******* 2026-04-08 00:58:29.217837 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-08 00:58:29.217844 | orchestrator | 2026-04-08 00:58:29.217850 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-08 00:58:29.217857 | orchestrator | Wednesday 08 April 2026 00:57:46 +0000 (0:00:13.178) 0:01:56.699 ******* 2026-04-08 00:58:29.217864 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-08 00:58:29.217870 | orchestrator | 2026-04-08 00:58:29.217877 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-08 00:58:29.217884 | orchestrator | Wednesday 08 April 2026 00:58:14 +0000 (0:00:27.963) 0:02:24.662 ******* 2026-04-08 00:58:29.217890 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-08 00:58:29.217897 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-08 00:58:29.217904 | orchestrator | 2026-04-08 00:58:29.217910 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-08 00:58:29.217917 | orchestrator | Wednesday 08 April 2026 00:58:22 +0000 (0:00:08.397) 0:02:33.060 ******* 2026-04-08 00:58:29.217924 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.217930 | orchestrator | 2026-04-08 00:58:29.217937 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-08 00:58:29.217943 | orchestrator | Wednesday 08 April 2026 00:58:23 +0000 (0:00:00.273) 0:02:33.333 ******* 2026-04-08 00:58:29.217950 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.217957 | orchestrator | 2026-04-08 00:58:29.217963 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-08 00:58:29.217970 | orchestrator | Wednesday 08 April 2026 00:58:23 +0000 (0:00:00.190) 0:02:33.524 ******* 2026-04-08 00:58:29.217977 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.217983 | orchestrator | 2026-04-08 00:58:29.217990 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-08 00:58:29.217997 | orchestrator | Wednesday 08 April 2026 00:58:23 +0000 (0:00:00.169) 0:02:33.693 ******* 2026-04-08 00:58:29.218008 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.218044 | orchestrator | 2026-04-08 00:58:29.218051 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-08 00:58:29.218058 | orchestrator | Wednesday 08 April 2026 00:58:23 +0000 (0:00:00.400) 0:02:34.093 ******* 2026-04-08 00:58:29.218065 | orchestrator | ok: [testbed-node-0] 2026-04-08 
00:58:29.218072 | orchestrator | 2026-04-08 00:58:29.218078 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:58:29.218085 | orchestrator | Wednesday 08 April 2026 00:58:27 +0000 (0:00:03.983) 0:02:38.077 ******* 2026-04-08 00:58:29.218092 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:58:29.218098 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:58:29.218105 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:58:29.218112 | orchestrator | 2026-04-08 00:58:29.218132 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:58:29.218140 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-08 00:58:29.218152 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-08 00:58:29.218159 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-08 00:58:29.218166 | orchestrator | 2026-04-08 00:58:29.218172 | orchestrator | 2026-04-08 00:58:29.218179 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:58:29.218186 | orchestrator | Wednesday 08 April 2026 00:58:28 +0000 (0:00:00.744) 0:02:38.822 ******* 2026-04-08 00:58:29.218192 | orchestrator | =============================================================================== 2026-04-08 00:58:29.218199 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.96s 2026-04-08 00:58:29.218206 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.92s 2026-04-08 00:58:29.218212 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.18s 2026-04-08 00:58:29.218219 | orchestrator | keystone : Restart keystone-ssh container 
------------------------------ 12.61s 2026-04-08 00:58:29.218226 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.01s 2026-04-08 00:58:29.218232 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.11s 2026-04-08 00:58:29.218239 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.40s 2026-04-08 00:58:29.218246 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.62s 2026-04-08 00:58:29.218252 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.33s 2026-04-08 00:58:29.218259 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.13s 2026-04-08 00:58:29.218265 | orchestrator | keystone : Creating default user role ----------------------------------- 3.98s 2026-04-08 00:58:29.218272 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.48s 2026-04-08 00:58:29.218279 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.23s 2026-04-08 00:58:29.218285 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.46s 2026-04-08 00:58:29.218296 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.41s 2026-04-08 00:58:29.218303 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.32s 2026-04-08 00:58:29.218310 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.21s 2026-04-08 00:58:29.218317 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.11s 2026-04-08 00:58:29.218323 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.69s 2026-04-08 00:58:29.218330 | orchestrator | keystone : Copying over wsgi-keystone.conf 
------------------------------ 1.51s 2026-04-08 00:58:29.218342 | orchestrator | 2026-04-08 00:58:29 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:29.218349 | orchestrator | 2026-04-08 00:58:29 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:29.218362 | orchestrator | 2026-04-08 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:32.264011 | orchestrator | 2026-04-08 00:58:32 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:32.266223 | orchestrator | 2026-04-08 00:58:32 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:32.268513 | orchestrator | 2026-04-08 00:58:32 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:32.272218 | orchestrator | 2026-04-08 00:58:32 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:32.323859 | orchestrator | 2026-04-08 00:58:32 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:32.323944 | orchestrator | 2026-04-08 00:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:35.301772 | orchestrator | 2026-04-08 00:58:35 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:35.301967 | orchestrator | 2026-04-08 00:58:35 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:35.302710 | orchestrator | 2026-04-08 00:58:35 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:35.303521 | orchestrator | 2026-04-08 00:58:35 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:35.304299 | orchestrator | 2026-04-08 00:58:35 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:35.304331 | orchestrator | 2026-04-08 00:58:35 | INFO  | Wait 1 second(s) until the next 
check 2026-04-08 00:58:38.349011 | orchestrator | 2026-04-08 00:58:38 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:38.352307 | orchestrator | 2026-04-08 00:58:38 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:38.354845 | orchestrator | 2026-04-08 00:58:38 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:38.357566 | orchestrator | 2026-04-08 00:58:38 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:38.360608 | orchestrator | 2026-04-08 00:58:38 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:38.360661 | orchestrator | 2026-04-08 00:58:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:41.414903 | orchestrator | 2026-04-08 00:58:41 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:41.415237 | orchestrator | 2026-04-08 00:58:41 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:41.417158 | orchestrator | 2026-04-08 00:58:41 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:41.418238 | orchestrator | 2026-04-08 00:58:41 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:41.418844 | orchestrator | 2026-04-08 00:58:41 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:41.418880 | orchestrator | 2026-04-08 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:44.450746 | orchestrator | 2026-04-08 00:58:44 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:44.451343 | orchestrator | 2026-04-08 00:58:44 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:44.451404 | orchestrator | 2026-04-08 00:58:44 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 
2026-04-08 00:58:44.451724 | orchestrator | 2026-04-08 00:58:44 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:44.452458 | orchestrator | 2026-04-08 00:58:44 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:44.452501 | orchestrator | 2026-04-08 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:47.617049 | orchestrator | 2026-04-08 00:58:47 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:47.617278 | orchestrator | 2026-04-08 00:58:47 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:47.617454 | orchestrator | 2026-04-08 00:58:47 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:47.618180 | orchestrator | 2026-04-08 00:58:47 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:47.618589 | orchestrator | 2026-04-08 00:58:47 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:47.618636 | orchestrator | 2026-04-08 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:50.652074 | orchestrator | 2026-04-08 00:58:50 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:50.652340 | orchestrator | 2026-04-08 00:58:50 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:50.653558 | orchestrator | 2026-04-08 00:58:50 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:50.654099 | orchestrator | 2026-04-08 00:58:50 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:50.654618 | orchestrator | 2026-04-08 00:58:50 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:50.654651 | orchestrator | 2026-04-08 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:53.720067 | 
orchestrator | 2026-04-08 00:58:53 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:53.720195 | orchestrator | 2026-04-08 00:58:53 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:53.720203 | orchestrator | 2026-04-08 00:58:53 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:53.720543 | orchestrator | 2026-04-08 00:58:53 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:53.721140 | orchestrator | 2026-04-08 00:58:53 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:53.721165 | orchestrator | 2026-04-08 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:56.745024 | orchestrator | 2026-04-08 00:58:56 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:56.745358 | orchestrator | 2026-04-08 00:58:56 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:56.745945 | orchestrator | 2026-04-08 00:58:56 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state STARTED 2026-04-08 00:58:56.746775 | orchestrator | 2026-04-08 00:58:56 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:56.747369 | orchestrator | 2026-04-08 00:58:56 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:56.747406 | orchestrator | 2026-04-08 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:58:59.770461 | orchestrator | 2026-04-08 00:58:59 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state STARTED 2026-04-08 00:58:59.772579 | orchestrator | 2026-04-08 00:58:59 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:58:59.772658 | orchestrator | 2026-04-08 00:58:59 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 00:58:59.773544 | 
orchestrator | 2026-04-08 00:58:59 | INFO  | Task 53a4ce7a-74a4-4954-8766-5a3579527e45 is in state SUCCESS 2026-04-08 00:58:59.773967 | orchestrator | 2026-04-08 00:58:59 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:58:59.775874 | orchestrator | 2026-04-08 00:58:59 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:58:59.775913 | orchestrator | 2026-04-08 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:59:02.803777 | orchestrator | 2026-04-08 00:59:02.803881 | orchestrator | 2026-04-08 00:59:02.803895 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:59:02.803904 | orchestrator | 2026-04-08 00:59:02.803912 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:59:02.803920 | orchestrator | Wednesday 08 April 2026 00:58:19 +0000 (0:00:00.386) 0:00:00.386 ******* 2026-04-08 00:59:02.803928 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:59:02.803936 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:59:02.803944 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:59:02.803966 | orchestrator | ok: [testbed-manager] 2026-04-08 00:59:02.803973 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:59:02.803981 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:59:02.803988 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:59:02.803996 | orchestrator | 2026-04-08 00:59:02.804008 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:59:02.804019 | orchestrator | Wednesday 08 April 2026 00:58:20 +0000 (0:00:00.759) 0:00:01.145 ******* 2026-04-08 00:59:02.804037 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-08 00:59:02.804051 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-08 00:59:02.804063 | orchestrator | ok: [testbed-node-2] => 
(item=enable_ceph_rgw_True) 2026-04-08 00:59:02.804075 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-08 00:59:02.804087 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-08 00:59:02.804123 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-08 00:59:02.804137 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-08 00:59:02.804149 | orchestrator | 2026-04-08 00:59:02.804161 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-08 00:59:02.804173 | orchestrator | 2026-04-08 00:59:02.804185 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-08 00:59:02.804197 | orchestrator | Wednesday 08 April 2026 00:58:21 +0000 (0:00:00.873) 0:00:02.019 ******* 2026-04-08 00:59:02.804211 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:59:02.804226 | orchestrator | 2026-04-08 00:59:02.804239 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-08 00:59:02.804250 | orchestrator | Wednesday 08 April 2026 00:58:23 +0000 (0:00:02.158) 0:00:04.177 ******* 2026-04-08 00:59:02.804257 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-08 00:59:02.804265 | orchestrator | 2026-04-08 00:59:02.804272 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-08 00:59:02.804280 | orchestrator | Wednesday 08 April 2026 00:58:28 +0000 (0:00:04.749) 0:00:08.927 ******* 2026-04-08 00:59:02.804289 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-08 00:59:02.804322 | orchestrator | changed: [testbed-node-0] => 
(item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-08 00:59:02.804347 | orchestrator | 2026-04-08 00:59:02.804355 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-08 00:59:02.804372 | orchestrator | Wednesday 08 April 2026 00:58:35 +0000 (0:00:07.272) 0:00:16.200 ******* 2026-04-08 00:59:02.804381 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-08 00:59:02.804390 | orchestrator | 2026-04-08 00:59:02.804398 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-08 00:59:02.804410 | orchestrator | Wednesday 08 April 2026 00:58:39 +0000 (0:00:03.757) 0:00:19.957 ******* 2026-04-08 00:59:02.804423 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-08 00:59:02.804431 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-08 00:59:02.804439 | orchestrator | 2026-04-08 00:59:02.804448 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-08 00:59:02.804457 | orchestrator | Wednesday 08 April 2026 00:58:44 +0000 (0:00:05.080) 0:00:25.038 ******* 2026-04-08 00:59:02.804465 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-08 00:59:02.804474 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-08 00:59:02.804482 | orchestrator | 2026-04-08 00:59:02.804491 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-08 00:59:02.804500 | orchestrator | Wednesday 08 April 2026 00:58:51 +0000 (0:00:07.120) 0:00:32.158 ******* 2026-04-08 00:59:02.804508 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-08 00:59:02.804517 | orchestrator | 2026-04-08 00:59:02.804525 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:59:02.804534 | 
orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.804542 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.804555 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.804567 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.804578 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.804612 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.804624 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.804634 | orchestrator | 2026-04-08 00:59:02.804645 | orchestrator | 2026-04-08 00:59:02.804655 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:59:02.804667 | orchestrator | Wednesday 08 April 2026 00:58:56 +0000 (0:00:04.974) 0:00:37.133 ******* 2026-04-08 00:59:02.804687 | orchestrator | =============================================================================== 2026-04-08 00:59:02.804700 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.27s 2026-04-08 00:59:02.804712 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.12s 2026-04-08 00:59:02.804723 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 5.08s 2026-04-08 00:59:02.804735 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.97s 2026-04-08 00:59:02.804747 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.75s 2026-04-08 00:59:02.804770 | orchestrator 
| service-ks-register : ceph-rgw | Creating projects ---------------------- 3.76s 2026-04-08 00:59:02.804782 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.16s 2026-04-08 00:59:02.804794 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-04-08 00:59:02.804805 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.76s 2026-04-08 00:59:02.804817 | orchestrator | 2026-04-08 00:59:02.804829 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-08 00:59:02.804841 | orchestrator | 2.16.14 2026-04-08 00:59:02.804870 | orchestrator | 2026-04-08 00:59:02.804881 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-08 00:59:02.804888 | orchestrator | 2026-04-08 00:59:02.804904 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-08 00:59:02.804912 | orchestrator | Wednesday 08 April 2026 00:58:13 +0000 (0:00:00.323) 0:00:00.323 ******* 2026-04-08 00:59:02.804919 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.804926 | orchestrator | 2026-04-08 00:59:02.804933 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-08 00:59:02.804940 | orchestrator | Wednesday 08 April 2026 00:58:14 +0000 (0:00:01.580) 0:00:01.904 ******* 2026-04-08 00:59:02.804947 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.804955 | orchestrator | 2026-04-08 00:59:02.804962 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-08 00:59:02.804969 | orchestrator | Wednesday 08 April 2026 00:58:15 +0000 (0:00:01.153) 0:00:03.058 ******* 2026-04-08 00:59:02.804976 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.804983 | orchestrator | 2026-04-08 00:59:02.804990 | orchestrator | TASK 
[Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-08 00:59:02.804998 | orchestrator | Wednesday 08 April 2026 00:58:17 +0000 (0:00:01.470) 0:00:04.528 ******* 2026-04-08 00:59:02.805005 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.805012 | orchestrator | 2026-04-08 00:59:02.805019 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-08 00:59:02.805026 | orchestrator | Wednesday 08 April 2026 00:58:18 +0000 (0:00:01.297) 0:00:05.826 ******* 2026-04-08 00:59:02.805034 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.805041 | orchestrator | 2026-04-08 00:59:02.805219 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-08 00:59:02.805229 | orchestrator | Wednesday 08 April 2026 00:58:19 +0000 (0:00:00.903) 0:00:06.729 ******* 2026-04-08 00:59:02.805236 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.805244 | orchestrator | 2026-04-08 00:59:02.805251 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-08 00:59:02.805258 | orchestrator | Wednesday 08 April 2026 00:58:20 +0000 (0:00:00.979) 0:00:07.709 ******* 2026-04-08 00:59:02.805265 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.805272 | orchestrator | 2026-04-08 00:59:02.805280 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-08 00:59:02.805287 | orchestrator | Wednesday 08 April 2026 00:58:21 +0000 (0:00:01.267) 0:00:08.976 ******* 2026-04-08 00:59:02.805294 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.805301 | orchestrator | 2026-04-08 00:59:02.805308 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-08 00:59:02.805316 | orchestrator | Wednesday 08 April 2026 00:58:23 +0000 (0:00:01.177) 0:00:10.154 ******* 2026-04-08 
00:59:02.805323 | orchestrator | changed: [testbed-manager] 2026-04-08 00:59:02.805330 | orchestrator | 2026-04-08 00:59:02.805337 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-08 00:59:02.805344 | orchestrator | Wednesday 08 April 2026 00:58:36 +0000 (0:00:13.544) 0:00:23.698 ******* 2026-04-08 00:59:02.805351 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:59:02.805358 | orchestrator | 2026-04-08 00:59:02.805366 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-08 00:59:02.805373 | orchestrator | 2026-04-08 00:59:02.805388 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-08 00:59:02.805396 | orchestrator | Wednesday 08 April 2026 00:58:36 +0000 (0:00:00.174) 0:00:23.873 ******* 2026-04-08 00:59:02.805403 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:59:02.805410 | orchestrator | 2026-04-08 00:59:02.805417 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-08 00:59:02.805424 | orchestrator | 2026-04-08 00:59:02.805431 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-08 00:59:02.805439 | orchestrator | Wednesday 08 April 2026 00:58:38 +0000 (0:00:01.925) 0:00:25.798 ******* 2026-04-08 00:59:02.805446 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:59:02.805453 | orchestrator | 2026-04-08 00:59:02.805460 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-08 00:59:02.805467 | orchestrator | 2026-04-08 00:59:02.805475 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-08 00:59:02.805491 | orchestrator | Wednesday 08 April 2026 00:58:50 +0000 (0:00:11.609) 0:00:37.407 ******* 2026-04-08 00:59:02.805499 | orchestrator | changed: [testbed-node-2] 
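The dashboard bootstrap tasks above correspond to plain `ceph` CLI calls. A hedged sketch of the equivalent commands, with option names and values taken from the task titles (the playbook's actual module invocations may differ):

```shell
# Assumed CLI equivalents of the bootstrap play; run on an admin/mgr host.
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
# Recent Ceph releases read the dashboard password from a file (-i), which is
# why the play writes ceph_dashboard_password to a temporary file first.
echo -n "$CEPH_DASHBOARD_PASSWORD" > /tmp/dashboard_password
ceph dashboard ac-user-create admin -i /tmp/dashboard_password administrator
rm /tmp/dashboard_password
```

The trailing restart of the ceph manager service on each node makes the active mgr pick up the new dashboard settings.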
2026-04-08 00:59:02.805506 | orchestrator | 2026-04-08 00:59:02.805513 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:59:02.805521 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:59:02.805603 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.805612 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.805619 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:59:02.805627 | orchestrator | 2026-04-08 00:59:02.805634 | orchestrator | 2026-04-08 00:59:02.805641 | orchestrator | 2026-04-08 00:59:02.805648 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:59:02.805655 | orchestrator | Wednesday 08 April 2026 00:59:01 +0000 (0:00:11.481) 0:00:48.889 ******* 2026-04-08 00:59:02.805663 | orchestrator | =============================================================================== 2026-04-08 00:59:02.805670 | orchestrator | Restart ceph manager service ------------------------------------------- 25.02s 2026-04-08 00:59:02.805677 | orchestrator | Create admin user ------------------------------------------------------ 13.54s 2026-04-08 00:59:02.805684 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.58s 2026-04-08 00:59:02.805691 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.47s 2026-04-08 00:59:02.805699 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.30s 2026-04-08 00:59:02.805706 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.27s 2026-04-08 00:59:02.805713 | orchestrator | Write 
ceph_dashboard_password to temporary file ------------------------- 1.18s 2026-04-08 00:59:02.805720 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.15s 2026-04-08 00:59:02.805727 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.98s 2026-04-08 00:59:02.805735 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.90s 2026-04-08 00:59:02.805742 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2026-04-08 00:59:02.805749 | orchestrator | 2026-04-08 00:59:02 | INFO  | Task dc4bf99f-4728-4d7e-865d-d1001659dea2 is in state SUCCESS 2026-04-08 00:59:02.805757 | orchestrator | 2026-04-08 00:59:02 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:59:02.805768 | orchestrator | 2026-04-08 00:59:02 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 00:59:02.806412 | orchestrator | 2026-04-08 00:59:02 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:59:02.807788 | orchestrator | 2026-04-08 00:59:02 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:59:02.807888 | orchestrator | 2026-04-08 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:59:05.838228 | orchestrator | 2026-04-08 00:59:05 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 00:59:05.839264 | orchestrator | 2026-04-08 00:59:05 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 00:59:05.840019 | orchestrator | 2026-04-08 00:59:05 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 00:59:05.841247 | orchestrator | 2026-04-08 00:59:05 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 00:59:05.841281 | orchestrator | 2026-04-08 00:59:05 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:01:07.563462 | orchestrator | 2026-04-08 01:01:07 | INFO  | Task 
b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:07.563934 | orchestrator | 2026-04-08 01:01:07 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:07.564921 | orchestrator | 2026-04-08 01:01:07 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 01:01:07.565700 | orchestrator | 2026-04-08 01:01:07 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 01:01:07.565745 | orchestrator | 2026-04-08 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:10.609246 | orchestrator | 2026-04-08 01:01:10 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:10.611281 | orchestrator | 2026-04-08 01:01:10 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:10.612639 | orchestrator | 2026-04-08 01:01:10 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 01:01:10.613782 | orchestrator | 2026-04-08 01:01:10 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state STARTED 2026-04-08 01:01:10.614086 | orchestrator | 2026-04-08 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:13.675082 | orchestrator | 2026-04-08 01:01:13 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:13.675169 | orchestrator | 2026-04-08 01:01:13 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:13.676096 | orchestrator | 2026-04-08 01:01:13 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:13.676775 | orchestrator | 2026-04-08 01:01:13 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED 2026-04-08 01:01:13.681724 | orchestrator | 2026-04-08 01:01:13 | INFO  | Task 0d7964e2-cfa8-4cd2-a403-b51d91ad6aee is in state SUCCESS 2026-04-08 01:01:13.682622 | orchestrator | 2026-04-08 01:01:13.684493 | orchestrator 
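The wait loop visible above (log every task's state, sleep between rounds, stop once no task is left in STARTED) can be sketched in Python. This is a minimal sketch of the observable behavior only; `wait_for_tasks` and `get_state` are hypothetical names, not the actual osism client API.

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=100):
    """Poll task states until no task is in STARTED anymore (hypothetical helper).

    get_state: callable mapping a task id to its current state string.
    Mirrors the log above: each round reports every task's state, then waits
    `interval` seconds before the next check.
    """
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not finish within the allotted checks")
```

Note that in the log the checks are roughly three seconds apart even though the message says one second; the extra time is spent querying the task states themselves, which a fixed `sleep` does not account for.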
|
2026-04-08 01:01:13.684531 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:01:13.684540 | orchestrator |
2026-04-08 01:01:13.684547 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:01:13.684555 | orchestrator | Wednesday 08 April 2026 00:58:12 +0000 (0:00:00.380) 0:00:00.380 *******
2026-04-08 01:01:13.684561 | orchestrator | ok: [testbed-manager]
2026-04-08 01:01:13.684569 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:01:13.684575 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:01:13.684582 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:01:13.684589 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:01:13.684596 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:01:13.684603 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:01:13.684609 | orchestrator |
2026-04-08 01:01:13.684616 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:01:13.684623 | orchestrator | Wednesday 08 April 2026 00:58:12 +0000 (0:00:00.752) 0:00:01.132 *******
2026-04-08 01:01:13.684630 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-08 01:01:13.684638 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-08 01:01:13.684659 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-08 01:01:13.684667 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-08 01:01:13.684673 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-08 01:01:13.684680 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-08 01:01:13.684687 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-08 01:01:13.684693 | orchestrator |
2026-04-08 01:01:13.684700 | orchestrator | PLAY [Apply role prometheus]
***************************************************
2026-04-08 01:01:13.684707 | orchestrator |
2026-04-08 01:01:13.684713 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-08 01:01:13.684720 | orchestrator | Wednesday 08 April 2026 00:58:13 +0000 (0:00:00.988) 0:00:02.120 *******
2026-04-08 01:01:13.684727 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 01:01:13.684735 | orchestrator |
2026-04-08 01:01:13.684741 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-08 01:01:13.684756 | orchestrator | Wednesday 08 April 2026 00:58:15 +0000 (0:00:01.312) 0:00:03.433 *******
2026-04-08 01:01:13.684765 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-08 01:01:13.684774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes':
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.684781 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.684788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.684803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.684816 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.684822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.684832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.684839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-08 01:01:13.684846 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-08 01:01:13.684855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.684865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.684875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.684881 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.684987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.684995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.685003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.685265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.685278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.685299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.685306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.685313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.685323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.685330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.685337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.685343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-04-08 01:01:13.685350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 01:01:13.685365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 01:01:13.685371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 01:01:13.685377 | orchestrator |
2026-04-08 01:01:13.685384 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-08 01:01:13.685391 | orchestrator | Wednesday 08 April 2026 00:58:19 +0000 (0:00:04.419) 0:00:07.853 *******
2026-04-08 01:01:13.685398 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 01:01:13.685405 | orchestrator |
2026-04-08 01:01:13.685410 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-08 01:01:13.685416 | orchestrator | Wednesday 08 April 2026 00:58:21 +0000 (0:00:01.374) 0:00:09.228 *******
2026-04-08 01:01:13.685425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 01:01:13.685432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 01:01:13.685439 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-08 01:01:13.685445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.685461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.685467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.685474 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.685484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686001 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.686177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-08 01:01:13.686188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686224 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686258 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686290 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-08 01:01:13.686300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.686333 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686369 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.686383 | orchestrator | 2026-04-08 01:01:13.686389 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-08 01:01:13.686396 | orchestrator | Wednesday 08 April 2026 00:58:27 +0000 (0:00:06.185) 0:00:15.413 ******* 2026-04-08 01:01:13.686421 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-08 01:01:13.686430 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686436 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686447 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-08 01:01:13.686460 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686467 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.686474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-08 01:01:13.686491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686554 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.686560 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.686570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686595 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.686602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686631 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686646 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.686656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686684 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.686692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686719 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.686726 | orchestrator | 2026-04-08 01:01:13.686734 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-08 01:01:13.686741 | orchestrator | Wednesday 08 April 2026 00:58:28 +0000 (0:00:01.568) 0:00:16.981 ******* 2026-04-08 01:01:13.686748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 
01:01:13.686773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686797 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-08 01:01:13.686809 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686827 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-08 01:01:13.686841 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686897 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686917 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.686926 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.686933 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.686941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 01:01:13.686963 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.686975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.686981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.686999 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.687022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.687030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.687037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.687044 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.687051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 01:01:13.687058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 01:01:13.687069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 01:01:13.687076 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:01:13.687083 | orchestrator |
2026-04-08 01:01:13.687095 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-08 01:01:13.687103 | orchestrator | Wednesday 08 April 2026 00:58:30 +0000 (0:00:02.232) 0:00:19.214 *******
2026-04-08 01:01:13.687110 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-08 01:01:13.687120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 01:01:13.687127 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.687134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.687142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.687150 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.687162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.687173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.687181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.687188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.687198 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.687213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 
01:01:13.687221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.687254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.687265 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-08 01:01:13.687274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.687281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687317 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.687334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.687341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 01:01:13.687346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 01:01:13.687353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 01:01:13.687366 | orchestrator |
2026-04-08 01:01:13.687373 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-04-08 01:01:13.687380 | orchestrator | Wednesday 08 April 2026 00:58:37 +0000 (0:00:06.615) 0:00:25.830 *******
2026-04-08 01:01:13.687386 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-08 01:01:13.687394 | orchestrator |
2026-04-08 01:01:13.687401 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-08 01:01:13.687412 | orchestrator | Wednesday 08 April 2026 00:58:38
(0:00:00.922) 0:00:26.752 ******* 2026-04-08 01:01:13.687419 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084949, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9804542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687428 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084949, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9804542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687437 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084949, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9804542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687445 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084949, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9804542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.687452 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084949, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9804542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687459 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1084992, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.990185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687474 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084949, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9804542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687481 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1084992, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.990185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687488 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084949, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9804542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.687498 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1084992, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 
1775606552.0, 'ctime': 1775607399.990185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.687506 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1084992, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.990185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.687513 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2026-04-08 01:01:13.687525 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2026-04-08 01:01:13.687809 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1084938, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9781513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.687825 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2026-04-08 01:01:13.687833 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2026-04-08 01:01:13.687844 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules)
2026-04-08 01:01:13.687851 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2026-04-08 01:01:13.687858 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2026-04-08 01:01:13.687871 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084970, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9853122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.687899 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-04-08 01:01:13.687907 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2026-04-08 01:01:13.687915 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2026-04-08 01:01:13.687925 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2026-04-08 01:01:13.687932 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2026-04-08 01:01:13.687939 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084931, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.976337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.687952 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2026-04-08 01:01:13.687960 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2026-04-08 01:01:13.687984 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-04-08 01:01:13.687992 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084953, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9810748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688003 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-04-08 01:01:13.688047 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2026-04-08 01:01:13.688055 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2026-04-08 01:01:13.688068 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1084967, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9841516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688075 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-04-08 01:01:13.688104 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-04-08 01:01:13.688112 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-04-08 01:01:13.688123 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-04-08 01:01:13.688131 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-04-08 01:01:13.688142 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-04-08 01:01:13.688150 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-04-08 01:01:13.688158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084957, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9817908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688183 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-04-08 01:01:13.688190 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-04-08 01:01:13.688197 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-04-08 01:01:13.688207 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-04-08 01:01:13.688219 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084944, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9795418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688226 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-04-08 01:01:13.688232 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-08 01:01:13.688258 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-04-08 01:01:13.688267 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-04-08 01:01:13.688273 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-04-08 01:01:13.688285 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084990, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9890404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688313 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-08 01:01:13.688322 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-08 01:01:13.688330 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084924, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.975292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688359 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-08 01:01:13.688368 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-08 01:01:13.688375 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-04-08 01:01:13.688390 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-08 01:01:13.688398 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085010, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688405 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-08 01:01:13.688413 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-08 01:01:13.688451 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-08 01:01:13.688460 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084984, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.988454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688467 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-08 01:01:13.688482 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-04-08 01:01:13.688489 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084936, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9775593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 01:01:13.688496 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-04-08 01:01:13.688503 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-08 01:01:13.688538 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-08 01:01:13.688547 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-04-08 01:01:13.688554 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-08 01:01:13.688568 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-08 01:01:13.688583 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084924, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0,
'ctime': 1775607399.975292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688591 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1084927, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9754798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688598 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084984, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.988454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688609 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085010, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688616 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084962, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9834921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688624 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084936, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9775593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688638 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084924, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.975292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688645 | orchestrator | skipping: [testbed-node-5] => 
(item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084936, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9775593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688660 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084958, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9820025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688668 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084984, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.988454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688681 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1084927, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9754798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688689 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085010, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688700 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084936, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9775593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688710 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085007, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 
1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688717 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.688725 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084931, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.976337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.688732 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084962, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9834921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688739 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085010, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688749 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1084927, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9754798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688757 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1084927, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9754798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688768 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084984, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.988454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 
01:01:13.688779 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084958, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9820025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688786 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084936, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9775593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688793 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084962, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9834921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688800 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084984, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.988454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688810 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085007, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688817 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.688828 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084962, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9834921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1084927, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9754798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688847 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084936, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9775593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688854 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084958, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9820025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688862 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1084927, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 
1775607399.9754798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085007, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688876 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.688888 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084962, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9834921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688901 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084953, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9810748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.688908 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084958, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9820025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688918 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084962, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9834921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688925 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085007, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688932 
| orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.688939 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084958, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9820025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688946 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084958, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9820025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688957 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085007, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688968 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.688974 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085007, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-08 01:01:13.688981 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.688987 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1084967, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9841516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.688996 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084957, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9817908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689003 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084944, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9795418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689025 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084990, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9890404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689031 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084924, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.975292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689045 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
334, 'inode': 1085010, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689052 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084984, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.988454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689058 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084936, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9775593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689067 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1084927, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9754798, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689074 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084962, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9834921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689080 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084958, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9820025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-08 01:01:13.689086 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085007, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9929786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-04-08 01:01:13.689097 | orchestrator | 2026-04-08 01:01:13.689104 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-08 01:01:13.689111 | orchestrator | Wednesday 08 April 2026 00:59:03 +0000 (0:00:25.164) 0:00:51.916 ******* 2026-04-08 01:01:13.689117 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:01:13.689123 | orchestrator | 2026-04-08 01:01:13.689132 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-08 01:01:13.689139 | orchestrator | Wednesday 08 April 2026 00:59:04 +0000 (0:00:00.916) 0:00:52.833 ******* 2026-04-08 01:01:13.689145 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.689153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689160 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-08 01:01:13.689167 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689174 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-08 01:01:13.689181 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:01:13.689188 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.689195 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689201 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-08 01:01:13.689208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689215 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-08 01:01:13.689223 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-08 01:01:13.689229 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.689236 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689243 
| orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-08 01:01:13.689248 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689254 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-08 01:01:13.689260 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-08 01:01:13.689266 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.689273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689280 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-08 01:01:13.689286 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689293 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-08 01:01:13.689299 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 01:01:13.689306 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.689315 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689322 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-08 01:01:13.689328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689334 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-08 01:01:13.689341 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 01:01:13.689347 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.689354 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689361 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-08 01:01:13.689367 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689374 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-08 
01:01:13.689385 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 01:01:13.689392 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.689399 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689406 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-08 01:01:13.689413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 01:01:13.689420 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-08 01:01:13.689427 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 01:01:13.689434 | orchestrator | 2026-04-08 01:01:13.689440 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-08 01:01:13.689446 | orchestrator | Wednesday 08 April 2026 00:59:06 +0000 (0:00:02.057) 0:00:54.891 ******* 2026-04-08 01:01:13.689453 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 01:01:13.689459 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.689466 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 01:01:13.689473 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.689479 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 01:01:13.689486 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.689493 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 01:01:13.689500 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.689507 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 01:01:13.689514 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.689521 | orchestrator | 
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 01:01:13.689529 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.689536 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-08 01:01:13.689542 | orchestrator | 2026-04-08 01:01:13.689549 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-08 01:01:13.689556 | orchestrator | Wednesday 08 April 2026 00:59:23 +0000 (0:00:16.803) 0:01:11.695 ******* 2026-04-08 01:01:13.689563 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 01:01:13.689575 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.689583 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 01:01:13.689590 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.689596 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 01:01:13.689603 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.689610 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 01:01:13.689617 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.689624 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 01:01:13.689631 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.689637 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 01:01:13.689644 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.689651 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-08 
01:01:13.689658 | orchestrator | 2026-04-08 01:01:13.689664 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-08 01:01:13.689671 | orchestrator | Wednesday 08 April 2026 00:59:27 +0000 (0:00:03.759) 0:01:15.455 ******* 2026-04-08 01:01:13.689679 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 01:01:13.689690 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.689698 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 01:01:13.689705 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.689712 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 01:01:13.689719 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.689730 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 01:01:13.689737 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 01:01:13.689744 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.689751 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.689758 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 01:01:13.689764 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.689769 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-08 01:01:13.689775 | orchestrator | 2026-04-08 01:01:13.689782 | 
orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-08 01:01:13.689789 | orchestrator | Wednesday 08 April 2026 00:59:28 +0000 (0:00:01.491) 0:01:16.946 ******* 2026-04-08 01:01:13.689795 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:01:13.689802 | orchestrator | 2026-04-08 01:01:13.689808 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-08 01:01:13.689815 | orchestrator | Wednesday 08 April 2026 00:59:29 +0000 (0:00:00.750) 0:01:17.697 ******* 2026-04-08 01:01:13.689822 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.689829 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.689836 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.689843 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.689851 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.689858 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.689865 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.689872 | orchestrator | 2026-04-08 01:01:13.689879 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-08 01:01:13.689886 | orchestrator | Wednesday 08 April 2026 00:59:30 +0000 (0:00:01.005) 0:01:18.702 ******* 2026-04-08 01:01:13.689893 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.689900 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.689907 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.689914 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.689921 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:13.689927 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:13.689933 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:13.689940 | orchestrator | 2026-04-08 01:01:13.689947 | orchestrator | TASK [prometheus : Copying cloud config file for 
openstack exporter] *********** 2026-04-08 01:01:13.689953 | orchestrator | Wednesday 08 April 2026 00:59:33 +0000 (0:00:02.567) 0:01:21.270 ******* 2026-04-08 01:01:13.689960 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 01:01:13.689967 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.689974 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 01:01:13.689981 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.689988 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 01:01:13.689994 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.690005 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 01:01:13.690048 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.690061 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 01:01:13.690068 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.690075 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 01:01:13.690082 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.690089 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 01:01:13.690096 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.690104 | orchestrator | 2026-04-08 01:01:13.690110 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-08 01:01:13.690117 | orchestrator | Wednesday 08 April 2026 00:59:34 +0000 (0:00:01.551) 0:01:22.821 ******* 2026-04-08 01:01:13.690124 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 01:01:13.690131 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.690137 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 01:01:13.690144 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 01:01:13.690151 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.690158 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.690164 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 01:01:13.690171 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.690178 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-08 01:01:13.690185 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 01:01:13.690193 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.690200 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 01:01:13.690207 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.690214 | orchestrator | 2026-04-08 01:01:13.690225 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-08 01:01:13.690232 | orchestrator | Wednesday 08 April 2026 00:59:36 +0000 (0:00:01.682) 0:01:24.503 ******* 2026-04-08 01:01:13.690239 | orchestrator | [WARNING]: Skipped 2026-04-08 01:01:13.690247 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-08 01:01:13.690254 | orchestrator | due to this access issue: 2026-04-08 01:01:13.690261 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-08 01:01:13.690268 | orchestrator | not a directory 2026-04-08 01:01:13.690275 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:01:13.690282 | orchestrator | 2026-04-08 01:01:13.690289 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-08 01:01:13.690297 | orchestrator | Wednesday 08 April 2026 00:59:37 +0000 (0:00:01.227) 0:01:25.731 ******* 2026-04-08 01:01:13.690304 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.690310 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.690317 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.690324 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.690331 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.690338 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.690345 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.690352 | orchestrator | 2026-04-08 01:01:13.690359 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-08 01:01:13.690371 | orchestrator | Wednesday 08 April 2026 00:59:38 +0000 (0:00:00.897) 0:01:26.628 ******* 2026-04-08 01:01:13.690378 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.690385 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:13.690393 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:13.690400 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:13.690406 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:01:13.690412 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:01:13.690420 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:01:13.690426 | orchestrator | 2026-04-08 01:01:13.690434 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-08 01:01:13.690441 | 
orchestrator | Wednesday 08 April 2026 00:59:39 +0000 (0:00:00.736) 0:01:27.365 ******* 2026-04-08 01:01:13.690448 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-08 01:01:13.690461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.690469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 
01:01:13.690477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.690490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.690498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.690517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.690534 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690581 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-08 01:01:13.690591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 01:01:13.690610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690645 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-04-08 01:01:13.690667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 01:01:13.690702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 01:01:13.690709 | orchestrator | 2026-04-08 01:01:13.690716 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-08 01:01:13.690723 | orchestrator | Wednesday 08 April 2026 00:59:44 +0000 (0:00:05.333) 0:01:32.699 ******* 2026-04-08 01:01:13.690730 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-08 01:01:13.690737 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:01:13.690745 | orchestrator | 2026-04-08 01:01:13.690753 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-08 01:01:13.690759 | orchestrator | Wednesday 08 April 2026 00:59:45 +0000 (0:00:01.435) 0:01:34.134 ******* 2026-04-08 01:01:13.690766 | orchestrator | 2026-04-08 01:01:13.690773 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-08 01:01:13.690781 | orchestrator | Wednesday 08 April 2026 00:59:45 +0000 (0:00:00.061) 0:01:34.195 ******* 2026-04-08 01:01:13.690788 | orchestrator | 2026-04-08 01:01:13.690794 | orchestrator | TASK [prometheus : Flush handlers] 
*********************************************
2026-04-08 01:01:13.690801 | orchestrator | Wednesday 08 April 2026 00:59:46 +0000 (0:00:00.061) 0:01:34.257 *******
2026-04-08 01:01:13.690808 | orchestrator |
2026-04-08 01:01:13.690816 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 01:01:13.690823 | orchestrator | Wednesday 08 April 2026 00:59:46 +0000 (0:00:00.067) 0:01:34.324 *******
2026-04-08 01:01:13.690830 | orchestrator |
2026-04-08 01:01:13.690837 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 01:01:13.690845 | orchestrator | Wednesday 08 April 2026 00:59:46 +0000 (0:00:00.062) 0:01:34.387 *******
2026-04-08 01:01:13.690852 | orchestrator |
2026-04-08 01:01:13.690859 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 01:01:13.690866 | orchestrator | Wednesday 08 April 2026 00:59:46 +0000 (0:00:00.058) 0:01:34.445 *******
2026-04-08 01:01:13.690873 | orchestrator |
2026-04-08 01:01:13.690880 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 01:01:13.690887 | orchestrator | Wednesday 08 April 2026 00:59:46 +0000 (0:00:00.075) 0:01:34.521 *******
2026-04-08 01:01:13.690894 | orchestrator |
2026-04-08 01:01:13.690901 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-08 01:01:13.690908 | orchestrator | Wednesday 08 April 2026 00:59:46 +0000 (0:00:00.085) 0:01:34.606 *******
2026-04-08 01:01:13.690915 | orchestrator | changed: [testbed-manager]
2026-04-08 01:01:13.690921 | orchestrator |
2026-04-08 01:01:13.690928 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-08 01:01:13.690940 | orchestrator | Wednesday 08 April 2026 01:00:05 +0000 (0:00:19.008) 0:01:53.615 *******
2026-04-08 01:01:13.690948 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:01:13.690955 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:01:13.690962 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:01:13.690969 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:01:13.690976 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:01:13.690983 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:01:13.690994 | orchestrator | changed: [testbed-manager]
2026-04-08 01:01:13.691000 | orchestrator |
2026-04-08 01:01:13.691026 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-08 01:01:13.691034 | orchestrator | Wednesday 08 April 2026 01:00:17 +0000 (0:00:12.487) 0:02:06.102 *******
2026-04-08 01:01:13.691041 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:01:13.691047 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:01:13.691053 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:01:13.691059 | orchestrator |
2026-04-08 01:01:13.691065 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-08 01:01:13.691072 | orchestrator | Wednesday 08 April 2026 01:00:22 +0000 (0:00:04.999) 0:02:11.101 *******
2026-04-08 01:01:13.691079 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:01:13.691085 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:01:13.691092 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:01:13.691099 | orchestrator |
2026-04-08 01:01:13.691106 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-08 01:01:13.691112 | orchestrator | Wednesday 08 April 2026 01:00:28 +0000 (0:00:05.224) 0:02:16.325 *******
2026-04-08 01:01:13.691119 | orchestrator | changed: [testbed-manager]
2026-04-08 01:01:13.691125 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:01:13.691132 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:01:13.691138 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:01:13.691144 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:01:13.691150 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:01:13.691156 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:01:13.691162 | orchestrator |
2026-04-08 01:01:13.691169 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-08 01:01:13.691176 | orchestrator | Wednesday 08 April 2026 01:00:41 +0000 (0:00:13.858) 0:02:30.184 *******
2026-04-08 01:01:13.691182 | orchestrator | changed: [testbed-manager]
2026-04-08 01:01:13.691189 | orchestrator |
2026-04-08 01:01:13.691195 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-08 01:01:13.691202 | orchestrator | Wednesday 08 April 2026 01:00:48 +0000 (0:00:06.368) 0:02:36.553 *******
2026-04-08 01:01:13.691212 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:01:13.691219 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:01:13.691225 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:01:13.691231 | orchestrator |
2026-04-08 01:01:13.691237 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-08 01:01:13.691243 | orchestrator | Wednesday 08 April 2026 01:00:55 +0000 (0:00:06.854) 0:02:43.408 *******
2026-04-08 01:01:13.691250 | orchestrator | changed: [testbed-manager]
2026-04-08 01:01:13.691256 | orchestrator |
2026-04-08 01:01:13.691263 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-08 01:01:13.691269 | orchestrator | Wednesday 08 April 2026 01:01:00 +0000 (0:00:05.039) 0:02:48.447 *******
2026-04-08 01:01:13.691276 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:01:13.691283 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:01:13.691290 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:01:13.691297 | orchestrator |
2026-04-08 01:01:13.691303 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:01:13.691310 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-08 01:01:13.691317 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-08 01:01:13.691324 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-08 01:01:13.691331 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-08 01:01:13.691343 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 01:01:13.691350 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 01:01:13.691357 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-08 01:01:13.691363 | orchestrator |
2026-04-08 01:01:13.691370 | orchestrator |
2026-04-08 01:01:13.691377 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:01:13.691383 | orchestrator | Wednesday 08 April 2026 01:01:11 +0000 (0:00:11.745) 0:03:00.193 *******
2026-04-08 01:01:13.691390 | orchestrator | ===============================================================================
2026-04-08 01:01:13.691397 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.16s
2026-04-08 01:01:13.691404 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.01s
2026-04-08 01:01:13.691410 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.80s
2026-04-08 01:01:13.691417 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.86s
2026-04-08 01:01:13.691424 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.49s
2026-04-08 01:01:13.691437 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.75s
2026-04-08 01:01:13.691444 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.85s
2026-04-08 01:01:13.691450 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.62s
2026-04-08 01:01:13.691456 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.37s
2026-04-08 01:01:13.691463 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.19s
2026-04-08 01:01:13.691469 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.33s
2026-04-08 01:01:13.691475 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.22s
2026-04-08 01:01:13.691482 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.04s
2026-04-08 01:01:13.691488 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.00s
2026-04-08 01:01:13.691494 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.42s
2026-04-08 01:01:13.691501 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.76s
2026-04-08 01:01:13.691508 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.57s
2026-04-08 01:01:13.691515 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.23s
2026-04-08 01:01:13.691522 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.06s
2026-04-08 01:01:13.691528 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.68s
2026-04-08 01:01:13.691534 | orchestrator | 2026-04-08 01:01:13 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:01:16.740445 | orchestrator | 2026-04-08 01:01:16 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED
2026-04-08 01:01:16.742347 | orchestrator | 2026-04-08 01:01:16 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:01:16.743129 | orchestrator | 2026-04-08 01:01:16 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED
2026-04-08 01:01:16.744395 | orchestrator | 2026-04-08 01:01:16 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state STARTED
2026-04-08 01:01:16.744436 | orchestrator | 2026-04-08 01:01:16 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:01:19.787256 | orchestrator | 2026-04-08 01:01:19 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED
2026-04-08 01:01:19.789380 | orchestrator | 2026-04-08 01:01:19 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:01:19.791438 | orchestrator | 2026-04-08 01:01:19 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED
2026-04-08 01:01:19.795359 | orchestrator | 2026-04-08 01:01:19 | INFO  | Task 28c7f861-22ef-477f-a801-362b84502a1b is in state SUCCESS
2026-04-08 01:01:19.796106 | orchestrator |
2026-04-08 01:01:19.798940 | orchestrator |
2026-04-08 01:01:19.799019 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:01:19.799030 | orchestrator |
2026-04-08 01:01:19.799037 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:01:19.799043 | orchestrator | Wednesday 08 April 2026 00:58:19 +0000 (0:00:00.330) 0:00:00.330 *******
2026-04-08 01:01:19.799047 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:01:19.799052 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:01:19.799056 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:01:19.799061 | orchestrator |
2026-04-08 01:01:19.799065 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:01:19.799069 | orchestrator | Wednesday 08 April 2026 00:58:19 +0000 (0:00:00.334) 0:00:00.664 *******
2026-04-08 01:01:19.799073 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-08 01:01:19.799078 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-08 01:01:19.799082 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-08 01:01:19.799086 | orchestrator |
2026-04-08 01:01:19.799089 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-08 01:01:19.799093 | orchestrator |
2026-04-08 01:01:19.799098 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-08 01:01:19.799104 | orchestrator | Wednesday 08 April 2026 00:58:20 +0000 (0:00:00.385) 0:00:01.049 *******
2026-04-08 01:01:19.799109 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:01:19.799116 | orchestrator |
2026-04-08 01:01:19.799121 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-08 01:01:19.799126 | orchestrator | Wednesday 08 April 2026 00:58:20 +0000 (0:00:00.642) 0:00:01.692 *******
2026-04-08 01:01:19.799132 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-08 01:01:19.799137 | orchestrator |
2026-04-08 01:01:19.799143 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-08 01:01:19.799149 | orchestrator | Wednesday 08 April 2026 00:58:25 +0000 (0:00:04.534) 0:00:06.227 *******
2026-04-08 01:01:19.799155 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-08 01:01:19.799162 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-08 01:01:19.799169 | orchestrator |
2026-04-08 01:01:19.799173 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-08 01:01:19.799176 | orchestrator | Wednesday 08 April 2026 00:58:32 +0000 (0:00:07.432) 0:00:13.659 *******
2026-04-08 01:01:19.799180 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-04-08 01:01:19.799184 | orchestrator |
2026-04-08 01:01:19.799188 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-08 01:01:19.799191 | orchestrator | Wednesday 08 April 2026 00:58:36 +0000 (0:00:03.937) 0:00:17.597 *******
2026-04-08 01:01:19.799196 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-08 01:01:19.799200 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-08 01:01:19.799204 | orchestrator |
2026-04-08 01:01:19.799208 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-08 01:01:19.799212 | orchestrator | Wednesday 08 April 2026 00:58:41 +0000 (0:00:04.524) 0:00:22.121 *******
2026-04-08 01:01:19.799216 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-08 01:01:19.799236 | orchestrator |
2026-04-08 01:01:19.799240 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-08 01:01:19.799244 | orchestrator | Wednesday 08 April 2026 00:58:44 +0000 (0:00:03.564) 0:00:25.686 *******
2026-04-08 01:01:19.799247 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-08 01:01:19.799251 | orchestrator |
2026-04-08 01:01:19.799255 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-08 01:01:19.799259 | orchestrator | Wednesday 08 April 2026 00:58:49
+0000 (0:00:04.364) 0:00:30.051 ******* 2026-04-08 01:01:19.799371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799398 | orchestrator | 2026-04-08 01:01:19.799402 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-08 01:01:19.799406 | orchestrator | Wednesday 08 April 2026 00:58:53 +0000 (0:00:03.885) 0:00:33.936 ******* 2026-04-08 01:01:19.799411 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:01:19.799415 | orchestrator | 2026-04-08 01:01:19.799418 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-08 01:01:19.799426 | orchestrator | Wednesday 08 April 2026 00:58:53 +0000 (0:00:00.516) 0:00:34.453 ******* 2026-04-08 01:01:19.799430 | orchestrator | changed: [testbed-node-1] 
2026-04-08 01:01:19.799434 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:01:19.799438 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:01:19.799441 | orchestrator |
2026-04-08 01:01:19.799445 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-08 01:01:19.799449 | orchestrator | Wednesday 08 April 2026 00:58:56 +0000 (0:00:03.168) 0:00:37.622 *******
2026-04-08 01:01:19.799453 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-08 01:01:19.799457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-08 01:01:19.799460 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-08 01:01:19.799464 | orchestrator |
2026-04-08 01:01:19.799468 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-08 01:01:19.799472 | orchestrator | Wednesday 08 April 2026 00:58:58 +0000 (0:00:01.676) 0:00:39.298 *******
2026-04-08 01:01:19.799476 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-08 01:01:19.799479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-08 01:01:19.799483 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-08 01:01:19.799487 | orchestrator |
2026-04-08 01:01:19.799491 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-08 01:01:19.799494 | orchestrator | Wednesday 08 April 2026 00:58:59 +0000 (0:00:01.287) 0:00:40.586 *******
2026-04-08 01:01:19.799502 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:01:19.799506 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:01:19.799509 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:01:19.799513 | orchestrator |
2026-04-08 01:01:19.799517 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-08 01:01:19.799521 | orchestrator | Wednesday 08 April 2026 00:59:00 +0000 (0:00:00.155) 0:00:41.224 *******
2026-04-08 01:01:19.799524 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:01:19.799528 | orchestrator |
2026-04-08 01:01:19.799532 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-08 01:01:19.799536 | orchestrator | Wednesday 08 April 2026 00:59:00 +0000 (0:00:00.155) 0:00:41.380 *******
2026-04-08 01:01:19.799539 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:01:19.799543 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:01:19.799547 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:01:19.799551 | orchestrator |
2026-04-08 01:01:19.799555 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-08 01:01:19.799558 | orchestrator | Wednesday 08 April 2026 00:59:00 +0000 (0:00:00.324) 0:00:41.704 *******
2026-04-08 01:01:19.799562 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:01:19.799566 | orchestrator |
2026-04-08 01:01:19.799570 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-08 01:01:19.799574 | orchestrator | Wednesday 08 April 2026 00:59:01 +0000 (0:00:00.703) 0:00:42.408 *******
2026-04-08 01:01:19.799581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799601 | orchestrator | 2026-04-08 01:01:19.799605 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-08 01:01:19.799612 | orchestrator | Wednesday 08 April 2026 00:59:06 +0000 (0:00:04.547) 0:00:46.956 ******* 2026-04-08 01:01:19.799621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 01:01:19.799629 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 01:01:19.799637 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 01:01:19.799652 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799659 | orchestrator | 2026-04-08 01:01:19.799663 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-08 01:01:19.799667 | orchestrator | Wednesday 08 April 2026 00:59:10 +0000 (0:00:04.023) 0:00:50.979 ******* 2026-04-08 01:01:19.799671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 01:01:19.799675 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 01:01:19.799685 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 01:01:19.799700 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799704 | orchestrator | 2026-04-08 01:01:19.799708 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-08 01:01:19.799711 | orchestrator | Wednesday 08 April 2026 00:59:13 +0000 (0:00:03.568) 0:00:54.548 ******* 2026-04-08 01:01:19.799715 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799719 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799723 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799727 | orchestrator | 2026-04-08 01:01:19.799731 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-08 01:01:19.799734 | orchestrator | Wednesday 08 April 2026 00:59:17 +0000 (0:00:03.841) 0:00:58.389 ******* 2026-04-08 01:01:19.799745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.799764 | orchestrator | 2026-04-08 01:01:19.799768 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-08 01:01:19.799772 | orchestrator | Wednesday 08 April 2026 00:59:21 +0000 (0:00:04.320) 0:01:02.710 ******* 2026-04-08 01:01:19.799776 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:19.799779 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:19.799783 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:19.799787 | orchestrator | 2026-04-08 01:01:19.799791 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-08 01:01:19.799797 | orchestrator | Wednesday 08 April 2026 00:59:27 +0000 (0:00:05.907) 0:01:08.618 ******* 2026-04-08 01:01:19.799801 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799804 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799808 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799815 | orchestrator | 2026-04-08 01:01:19.799818 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-08 01:01:19.799822 | orchestrator | Wednesday 08 April 2026 00:59:32 +0000 (0:00:04.243) 0:01:12.861 ******* 2026-04-08 01:01:19.799826 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799830 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799834 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799837 | orchestrator | 2026-04-08 01:01:19.799841 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-08 01:01:19.799845 | orchestrator | Wednesday 08 April 2026 00:59:36 +0000 (0:00:04.006) 0:01:16.868 ******* 2026-04-08 01:01:19.799849 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799852 | orchestrator | skipping: [testbed-node-0] 2026-04-08 
01:01:19.799858 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:19.799863 | orchestrator | 2026-04-08 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:19.799871 | orchestrator | 2026-04-08 01:01:19.799875 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-08 01:01:19.799879 | orchestrator | Wednesday 08 April 2026 00:59:39 +0000 (0:00:03.608) 0:01:20.476 ******* 2026-04-08 01:01:19.799883 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799886 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799890 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799894 | orchestrator | 2026-04-08 01:01:19.799898 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-08 01:01:19.799901 | orchestrator | Wednesday 08 April 2026 00:59:44 +0000 (0:00:05.000) 0:01:25.477 ******* 2026-04-08 01:01:19.799905 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799909 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799913 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799916 | orchestrator | 2026-04-08 01:01:19.799920 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-08 01:01:19.799924 | orchestrator | Wednesday 08 April 2026 00:59:45 +0000 (0:00:00.360) 0:01:25.838 ******* 2026-04-08 01:01:19.799928 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-08 01:01:19.799932 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799935 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-08 01:01:19.799939 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 01:01:19.799943 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-08 01:01:19.799947 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799951 | orchestrator | 2026-04-08 01:01:19.799954 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-08 01:01:19.799958 | orchestrator | Wednesday 08 April 2026 00:59:48 +0000 (0:00:03.826) 0:01:29.664 ******* 2026-04-08 01:01:19.799962 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799966 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799970 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799973 | orchestrator | 2026-04-08 01:01:19.799977 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-08 01:01:19.799981 | orchestrator | Wednesday 08 April 2026 00:59:53 +0000 (0:00:04.257) 0:01:33.921 ******* 2026-04-08 01:01:19.799985 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.799988 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.799992 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.799996 | orchestrator | 2026-04-08 01:01:19.800000 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-08 01:01:19.800041 | orchestrator | Wednesday 08 April 2026 00:59:57 +0000 (0:00:04.059) 0:01:37.981 ******* 2026-04-08 01:01:19.800054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.800063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.800069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 01:01:19.800077 | orchestrator | 2026-04-08 01:01:19.800082 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-08 01:01:19.800087 | orchestrator | Wednesday 08 April 2026 01:00:01 +0000 (0:00:04.412) 0:01:42.393 ******* 2026-04-08 01:01:19.800091 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:19.800095 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:19.800101 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:19.800106 | orchestrator | 2026-04-08 01:01:19.800112 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-08 01:01:19.800121 | orchestrator | Wednesday 08 April 2026 01:00:01 +0000 (0:00:00.252) 0:01:42.646 ******* 2026-04-08 01:01:19.800126 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:19.800132 | orchestrator | 2026-04-08 01:01:19.800137 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-08 01:01:19.800143 | orchestrator | Wednesday 08 April 2026 01:00:04 +0000 (0:00:02.631) 0:01:45.277 ******* 2026-04-08 01:01:19.800150 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:19.800156 | orchestrator | 2026-04-08 01:01:19.800161 | orchestrator | TASK 
[glance : Enable log_bin_trust_function_creators function] **************** 2026-04-08 01:01:19.800168 | orchestrator | Wednesday 08 April 2026 01:00:07 +0000 (0:00:02.864) 0:01:48.143 ******* 2026-04-08 01:01:19.800174 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:19.800178 | orchestrator | 2026-04-08 01:01:19.800182 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-08 01:01:19.800186 | orchestrator | Wednesday 08 April 2026 01:00:09 +0000 (0:00:02.528) 0:01:50.671 ******* 2026-04-08 01:01:19.800192 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:19.800196 | orchestrator | 2026-04-08 01:01:19.800199 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-08 01:01:19.800203 | orchestrator | Wednesday 08 April 2026 01:00:40 +0000 (0:00:30.215) 0:02:20.887 ******* 2026-04-08 01:01:19.800207 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:19.800211 | orchestrator | 2026-04-08 01:01:19.800215 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-08 01:01:19.800218 | orchestrator | Wednesday 08 April 2026 01:00:42 +0000 (0:00:02.268) 0:02:23.156 ******* 2026-04-08 01:01:19.800222 | orchestrator | 2026-04-08 01:01:19.800226 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-08 01:01:19.800230 | orchestrator | Wednesday 08 April 2026 01:00:42 +0000 (0:00:00.059) 0:02:23.216 ******* 2026-04-08 01:01:19.800234 | orchestrator | 2026-04-08 01:01:19.800237 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-08 01:01:19.800241 | orchestrator | Wednesday 08 April 2026 01:00:42 +0000 (0:00:00.055) 0:02:23.272 ******* 2026-04-08 01:01:19.800245 | orchestrator | 2026-04-08 01:01:19.800249 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] 
************************ 2026-04-08 01:01:19.800252 | orchestrator | Wednesday 08 April 2026 01:00:42 +0000 (0:00:00.060) 0:02:23.333 ******* 2026-04-08 01:01:19.800256 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:19.800263 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:19.800267 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:19.800271 | orchestrator | 2026-04-08 01:01:19.800275 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:01:19.800278 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-08 01:01:19.800283 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-08 01:01:19.800287 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-08 01:01:19.800291 | orchestrator | 2026-04-08 01:01:19.800295 | orchestrator | 2026-04-08 01:01:19.800298 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:01:19.800302 | orchestrator | Wednesday 08 April 2026 01:01:17 +0000 (0:00:34.855) 0:02:58.188 ******* 2026-04-08 01:01:19.800306 | orchestrator | =============================================================================== 2026-04-08 01:01:19.800310 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.86s 2026-04-08 01:01:19.800313 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.22s 2026-04-08 01:01:19.800317 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.43s 2026-04-08 01:01:19.800321 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.91s 2026-04-08 01:01:19.800325 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.00s 
2026-04-08 01:01:19.800329 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.55s 2026-04-08 01:01:19.800333 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.54s 2026-04-08 01:01:19.800336 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.52s 2026-04-08 01:01:19.800340 | orchestrator | glance : Check glance containers ---------------------------------------- 4.41s 2026-04-08 01:01:19.800344 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.36s 2026-04-08 01:01:19.800348 | orchestrator | glance : Copying over config.json files for services -------------------- 4.32s 2026-04-08 01:01:19.800351 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.26s 2026-04-08 01:01:19.800355 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.24s 2026-04-08 01:01:19.800359 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.06s 2026-04-08 01:01:19.800363 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.02s 2026-04-08 01:01:19.800367 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.01s 2026-04-08 01:01:19.800370 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.94s 2026-04-08 01:01:19.800374 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.88s 2026-04-08 01:01:19.800378 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.84s 2026-04-08 01:01:19.800382 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.83s 2026-04-08 01:01:22.878548 | orchestrator | 2026-04-08 01:01:22 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state 
STARTED 2026-04-08 01:01:22.881336 | orchestrator | 2026-04-08 01:01:22 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:22.883223 | orchestrator | 2026-04-08 01:01:22 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:22.885926 | orchestrator | 2026-04-08 01:01:22 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:22.886157 | orchestrator | 2026-04-08 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:26.006840 | orchestrator | 2026-04-08 01:01:26 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:26.009487 | orchestrator | 2026-04-08 01:01:26 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:26.012428 | orchestrator | 2026-04-08 01:01:26 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:26.014567 | orchestrator | 2026-04-08 01:01:26 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:26.014601 | orchestrator | 2026-04-08 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:29.066328 | orchestrator | 2026-04-08 01:01:29 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:29.068489 | orchestrator | 2026-04-08 01:01:29 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:29.069809 | orchestrator | 2026-04-08 01:01:29 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:29.071272 | orchestrator | 2026-04-08 01:01:29 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:29.071299 | orchestrator | 2026-04-08 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:32.119558 | orchestrator | 2026-04-08 01:01:32 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 
01:01:32.122187 | orchestrator | 2026-04-08 01:01:32 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:32.124485 | orchestrator | 2026-04-08 01:01:32 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:32.127025 | orchestrator | 2026-04-08 01:01:32 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:32.127100 | orchestrator | 2026-04-08 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:35.177853 | orchestrator | 2026-04-08 01:01:35 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:35.179035 | orchestrator | 2026-04-08 01:01:35 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:35.181217 | orchestrator | 2026-04-08 01:01:35 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:35.183012 | orchestrator | 2026-04-08 01:01:35 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:35.183271 | orchestrator | 2026-04-08 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:38.233034 | orchestrator | 2026-04-08 01:01:38 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:38.234423 | orchestrator | 2026-04-08 01:01:38 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:38.236523 | orchestrator | 2026-04-08 01:01:38 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:38.237867 | orchestrator | 2026-04-08 01:01:38 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:38.237917 | orchestrator | 2026-04-08 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:41.299647 | orchestrator | 2026-04-08 01:01:41 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:41.301165 | orchestrator 
| 2026-04-08 01:01:41 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:41.302531 | orchestrator | 2026-04-08 01:01:41 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:41.303679 | orchestrator | 2026-04-08 01:01:41 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:41.303724 | orchestrator | 2026-04-08 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:44.349326 | orchestrator | 2026-04-08 01:01:44 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:44.349775 | orchestrator | 2026-04-08 01:01:44 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:44.350607 | orchestrator | 2026-04-08 01:01:44 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:44.351655 | orchestrator | 2026-04-08 01:01:44 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:44.351688 | orchestrator | 2026-04-08 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:47.402422 | orchestrator | 2026-04-08 01:01:47 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state STARTED 2026-04-08 01:01:47.404134 | orchestrator | 2026-04-08 01:01:47 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:47.406122 | orchestrator | 2026-04-08 01:01:47 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:47.408407 | orchestrator | 2026-04-08 01:01:47 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:47.408473 | orchestrator | 2026-04-08 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:50.453886 | orchestrator | 2026-04-08 01:01:50 | INFO  | Task b0182c28-75aa-428e-b04b-65c03fd407ba is in state SUCCESS 2026-04-08 01:01:50.455972 | orchestrator | 2026-04-08 01:01:50.456132 | 
orchestrator | 2026-04-08 01:01:50.456145 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 01:01:50.456153 | orchestrator | 2026-04-08 01:01:50.456160 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 01:01:50.456167 | orchestrator | Wednesday 08 April 2026 00:58:33 +0000 (0:00:00.251) 0:00:00.251 ******* 2026-04-08 01:01:50.456174 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:01:50.456182 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:01:50.456189 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:01:50.456195 | orchestrator | 2026-04-08 01:01:50.456202 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 01:01:50.456208 | orchestrator | Wednesday 08 April 2026 00:58:34 +0000 (0:00:00.230) 0:00:00.482 ******* 2026-04-08 01:01:50.456215 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-08 01:01:50.456223 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-08 01:01:50.456230 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-08 01:01:50.456236 | orchestrator | 2026-04-08 01:01:50.456243 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-08 01:01:50.456250 | orchestrator | 2026-04-08 01:01:50.456257 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-08 01:01:50.456265 | orchestrator | Wednesday 08 April 2026 00:58:34 +0000 (0:00:00.265) 0:00:00.748 ******* 2026-04-08 01:01:50.456272 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:01:50.456280 | orchestrator | 2026-04-08 01:01:50.456287 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-08 01:01:50.456294 | orchestrator 
| Wednesday 08 April 2026 00:58:34 +0000 (0:00:00.580) 0:00:01.328 ******* 2026-04-08 01:01:50.456301 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-08 01:01:50.456308 | orchestrator | 2026-04-08 01:01:50.456316 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-08 01:01:50.456324 | orchestrator | Wednesday 08 April 2026 00:58:39 +0000 (0:00:04.373) 0:00:05.701 ******* 2026-04-08 01:01:50.456355 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-08 01:01:50.456362 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-08 01:01:50.456369 | orchestrator | 2026-04-08 01:01:50.456375 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-08 01:01:50.456383 | orchestrator | Wednesday 08 April 2026 00:58:47 +0000 (0:00:07.854) 0:00:13.555 ******* 2026-04-08 01:01:50.456390 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-08 01:01:50.456917 | orchestrator | 2026-04-08 01:01:50.456942 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-08 01:01:50.456947 | orchestrator | Wednesday 08 April 2026 00:58:50 +0000 (0:00:03.749) 0:00:17.308 ******* 2026-04-08 01:01:50.456951 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-08 01:01:50.456956 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-08 01:01:50.456960 | orchestrator | 2026-04-08 01:01:50.456964 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-08 01:01:50.456968 | orchestrator | Wednesday 08 April 2026 00:58:55 +0000 (0:00:04.285) 0:00:21.594 ******* 2026-04-08 01:01:50.456972 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-08 01:01:50.456976 | orchestrator | 2026-04-08 01:01:50.456980 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-08 01:01:50.457024 | orchestrator | Wednesday 08 April 2026 00:58:58 +0000 (0:00:03.499) 0:00:25.093 ******* 2026-04-08 01:01:50.457029 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-08 01:01:50.457034 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-08 01:01:50.457040 | orchestrator | 2026-04-08 01:01:50.457046 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-08 01:01:50.457051 | orchestrator | Wednesday 08 April 2026 00:59:07 +0000 (0:00:08.927) 0:00:34.021 ******* 2026-04-08 01:01:50.457079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.457128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.457154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.457179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.457263 | orchestrator | 
2026-04-08 01:01:50.457270 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-08 01:01:50.457277 | orchestrator | Wednesday 08 April 2026 00:59:11 +0000 (0:00:03.557) 0:00:37.578 ******* 2026-04-08 01:01:50.457283 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.457289 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:50.457296 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:50.457302 | orchestrator | 2026-04-08 01:01:50.457308 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-08 01:01:50.457314 | orchestrator | Wednesday 08 April 2026 00:59:11 +0000 (0:00:00.233) 0:00:37.811 ******* 2026-04-08 01:01:50.457320 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:01:50.457326 | orchestrator | 2026-04-08 01:01:50.457331 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-08 01:01:50.457355 | orchestrator | Wednesday 08 April 2026 00:59:12 +0000 (0:00:00.708) 0:00:38.520 ******* 2026-04-08 01:01:50.457368 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-08 01:01:50.457374 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-08 01:01:50.457381 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-08 01:01:50.457387 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-08 01:01:50.457393 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-08 01:01:50.457400 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-08 01:01:50.457406 | orchestrator | 2026-04-08 01:01:50.457412 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-08 01:01:50.457418 | orchestrator | Wednesday 08 April 2026 00:59:14 +0000 
(0:00:02.133) 0:00:40.653 ******* 2026-04-08 01:01:50.457426 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-08 01:01:50.457434 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-08 01:01:50.457444 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-08 01:01:50.457451 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-08 01:01:50.457482 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-08 01:01:50.457490 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-08 01:01:50.457497 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-08 01:01:50.457505 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-08 01:01:50.457516 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-08 01:01:50.457544 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-08 01:01:50.457554 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-08 01:01:50.457561 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-08 01:01:50.457567 | orchestrator | 2026-04-08 01:01:50.457573 | orchestrator | TASK 
[cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-08 01:01:50.457581 | orchestrator | Wednesday 08 April 2026 00:59:18 +0000 (0:00:03.938) 0:00:44.591 ******* 2026-04-08 01:01:50.457587 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-08 01:01:50.457594 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-08 01:01:50.457611 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-08 01:01:50.457618 | orchestrator | 2026-04-08 01:01:50.457624 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-08 01:01:50.457630 | orchestrator | Wednesday 08 April 2026 00:59:20 +0000 (0:00:01.875) 0:00:46.467 ******* 2026-04-08 01:01:50.457636 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-08 01:01:50.457642 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-08 01:01:50.457648 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-08 01:01:50.457655 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-08 01:01:50.457661 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-08 01:01:50.457668 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-08 01:01:50.457674 | orchestrator | 2026-04-08 01:01:50.457681 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-08 01:01:50.457687 | orchestrator | Wednesday 08 April 2026 00:59:22 +0000 (0:00:02.951) 0:00:49.418 ******* 2026-04-08 01:01:50.457700 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-08 01:01:50.457708 | orchestrator | ok: [testbed-node-1] => 
(item=cinder-volume) 2026-04-08 01:01:50.457714 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-08 01:01:50.457721 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-08 01:01:50.457728 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-08 01:01:50.457734 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-08 01:01:50.457740 | orchestrator | 2026-04-08 01:01:50.457747 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-08 01:01:50.457753 | orchestrator | Wednesday 08 April 2026 00:59:24 +0000 (0:00:01.152) 0:00:50.570 ******* 2026-04-08 01:01:50.457759 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.457766 | orchestrator | 2026-04-08 01:01:50.457772 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-08 01:01:50.457778 | orchestrator | Wednesday 08 April 2026 00:59:24 +0000 (0:00:00.577) 0:00:51.148 ******* 2026-04-08 01:01:50.457785 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.457791 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:50.457797 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:50.457803 | orchestrator | 2026-04-08 01:01:50.457809 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-08 01:01:50.457815 | orchestrator | Wednesday 08 April 2026 00:59:25 +0000 (0:00:00.420) 0:00:51.569 ******* 2026-04-08 01:01:50.457821 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:01:50.457849 | orchestrator | 2026-04-08 01:01:50.457856 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-08 01:01:50.457863 | orchestrator | Wednesday 08 April 2026 00:59:25 +0000 (0:00:00.481) 0:00:52.050 ******* 2026-04-08 01:01:50.457869 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.457906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.457915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.457935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458108 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458186 | orchestrator | 2026-04-08 01:01:50.458192 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-08 01:01:50.458199 | orchestrator | Wednesday 08 April 2026 00:59:29 +0000 (0:00:04.371) 0:00:56.424 ******* 2026-04-08 01:01:50.458206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.458212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458241 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.458253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.458260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458286 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:50.458297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.458307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458321 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458332 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:50.458338 | orchestrator | 2026-04-08 01:01:50.458344 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-08 01:01:50.458350 | orchestrator | Wednesday 08 April 2026 00:59:31 +0000 (0:00:01.229) 0:00:57.655 ******* 2026-04-08 01:01:50.458357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.458367 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458394 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.458400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.458416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458439 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:50.458449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.458456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.458481 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:50.458487 | orchestrator | 2026-04-08 01:01:50.458492 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-08 01:01:50.458499 | orchestrator | Wednesday 08 April 2026 00:59:32 +0000 (0:00:01.080) 0:00:58.736 ******* 2026-04-08 01:01:50.458508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.458521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.458528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.458539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458622 | orchestrator | 2026-04-08 01:01:50.458628 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] 
********************************** 2026-04-08 01:01:50.458635 | orchestrator | Wednesday 08 April 2026 00:59:37 +0000 (0:00:04.735) 0:01:03.472 ******* 2026-04-08 01:01:50.458641 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-08 01:01:50.458648 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-08 01:01:50.458654 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-08 01:01:50.458660 | orchestrator | 2026-04-08 01:01:50.458666 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-08 01:01:50.458673 | orchestrator | Wednesday 08 April 2026 00:59:39 +0000 (0:00:02.436) 0:01:05.909 ******* 2026-04-08 01:01:50.458685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.458698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.458705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.458712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.458902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2026-04-08 01:01:50.458914 | orchestrator | 2026-04-08 01:01:50.458920 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-08 01:01:50.458924 | orchestrator | Wednesday 08 April 2026 00:59:53 +0000 (0:00:14.448) 0:01:20.357 ******* 2026-04-08 01:01:50.458929 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.458934 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:50.458940 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:50.458946 | orchestrator | 2026-04-08 01:01:50.458952 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-08 01:01:50.458957 | orchestrator | Wednesday 08 April 2026 00:59:56 +0000 (0:00:02.148) 0:01:22.506 ******* 2026-04-08 01:01:50.458963 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.458970 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:50.458976 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:50.459023 | orchestrator | 2026-04-08 01:01:50.459031 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-08 01:01:50.459037 | orchestrator | Wednesday 08 April 2026 00:59:57 +0000 (0:00:01.830) 0:01:24.337 ******* 2026-04-08 01:01:50.459044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.459050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459138 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.459152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.459156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 
01:01:50.459160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459168 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:50.459176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-08 01:01:50.459184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 01:01:50.459200 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:50.459203 | orchestrator | 2026-04-08 01:01:50.459207 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-08 01:01:50.459211 | orchestrator | Wednesday 08 April 2026 00:59:59 +0000 (0:00:01.365) 0:01:25.702 ******* 2026-04-08 01:01:50.459215 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.459219 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:50.459223 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:50.459227 | orchestrator | 2026-04-08 01:01:50.459231 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-08 01:01:50.459235 | orchestrator | Wednesday 08 April 2026 00:59:59 +0000 (0:00:00.377) 0:01:26.080 ******* 2026-04-08 01:01:50.459239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.459246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.459259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-08 01:01:50.459263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-08 01:01:50.459339 | orchestrator | 2026-04-08 01:01:50.459345 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-08 01:01:50.459351 | orchestrator | Wednesday 08 April 2026 01:00:03 +0000 (0:00:03.901) 0:01:29.981 ******* 2026-04-08 01:01:50.459382 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.459389 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:01:50.459395 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:01:50.459409 | orchestrator | 2026-04-08 01:01:50.459414 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-08 01:01:50.459418 | orchestrator | Wednesday 08 April 2026 01:00:03 +0000 (0:00:00.246) 0:01:30.228 ******* 2026-04-08 01:01:50.459422 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.459425 | orchestrator | 2026-04-08 01:01:50.459430 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-08 01:01:50.459438 | orchestrator | Wednesday 08 April 2026 01:00:06 +0000 (0:00:02.398) 0:01:32.626 ******* 2026-04-08 01:01:50.459441 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.459445 | orchestrator | 2026-04-08 01:01:50.459449 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-08 01:01:50.459453 | 
orchestrator | Wednesday 08 April 2026 01:00:08 +0000 (0:00:02.579) 0:01:35.207 ******* 2026-04-08 01:01:50.459456 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.459460 | orchestrator | 2026-04-08 01:01:50.459471 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-08 01:01:50.459475 | orchestrator | Wednesday 08 April 2026 01:00:31 +0000 (0:00:22.894) 0:01:58.102 ******* 2026-04-08 01:01:50.459479 | orchestrator | 2026-04-08 01:01:50.459483 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-08 01:01:50.459486 | orchestrator | Wednesday 08 April 2026 01:00:31 +0000 (0:00:00.082) 0:01:58.185 ******* 2026-04-08 01:01:50.459490 | orchestrator | 2026-04-08 01:01:50.459495 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-08 01:01:50.459501 | orchestrator | Wednesday 08 April 2026 01:00:31 +0000 (0:00:00.093) 0:01:58.279 ******* 2026-04-08 01:01:50.459510 | orchestrator | 2026-04-08 01:01:50.459519 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-08 01:01:50.459525 | orchestrator | Wednesday 08 April 2026 01:00:31 +0000 (0:00:00.109) 0:01:58.388 ******* 2026-04-08 01:01:50.459531 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.459537 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:50.459544 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:50.459550 | orchestrator | 2026-04-08 01:01:50.459556 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-08 01:01:50.459567 | orchestrator | Wednesday 08 April 2026 01:00:58 +0000 (0:00:26.603) 0:02:24.992 ******* 2026-04-08 01:01:50.459574 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.459579 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:50.459585 | orchestrator | changed: 
[testbed-node-2] 2026-04-08 01:01:50.459590 | orchestrator | 2026-04-08 01:01:50.459596 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-08 01:01:50.459602 | orchestrator | Wednesday 08 April 2026 01:01:10 +0000 (0:00:12.012) 0:02:37.005 ******* 2026-04-08 01:01:50.459607 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.459613 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:50.459619 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:50.459624 | orchestrator | 2026-04-08 01:01:50.459630 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-08 01:01:50.459636 | orchestrator | Wednesday 08 April 2026 01:01:39 +0000 (0:00:28.529) 0:03:05.535 ******* 2026-04-08 01:01:50.459642 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:01:50.459647 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:01:50.459653 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:01:50.459659 | orchestrator | 2026-04-08 01:01:50.459664 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-08 01:01:50.459671 | orchestrator | Wednesday 08 April 2026 01:01:49 +0000 (0:00:10.239) 0:03:15.774 ******* 2026-04-08 01:01:50.459677 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:01:50.459690 | orchestrator | 2026-04-08 01:01:50.459697 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:01:50.459713 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-08 01:01:50.459721 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 01:01:50.459727 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 01:01:50.459732 | orchestrator | 2026-04-08 
01:01:50.459738 | orchestrator | 2026-04-08 01:01:50.459744 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:01:50.459751 | orchestrator | Wednesday 08 April 2026 01:01:49 +0000 (0:00:00.249) 0:03:16.024 ******* 2026-04-08 01:01:50.459757 | orchestrator | =============================================================================== 2026-04-08 01:01:50.459762 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 28.53s 2026-04-08 01:01:50.459769 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.60s 2026-04-08 01:01:50.459776 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.89s 2026-04-08 01:01:50.459783 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.45s 2026-04-08 01:01:50.459789 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.01s 2026-04-08 01:01:50.459795 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.24s 2026-04-08 01:01:50.459802 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.93s 2026-04-08 01:01:50.459809 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.86s 2026-04-08 01:01:50.459816 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.74s 2026-04-08 01:01:50.459823 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.37s 2026-04-08 01:01:50.459829 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.37s 2026-04-08 01:01:50.459834 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.29s 2026-04-08 01:01:50.459840 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services 
------------ 3.94s 2026-04-08 01:01:50.459846 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.90s 2026-04-08 01:01:50.459852 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.75s 2026-04-08 01:01:50.459863 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.56s 2026-04-08 01:01:50.459869 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.50s 2026-04-08 01:01:50.459875 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.95s 2026-04-08 01:01:50.459881 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.58s 2026-04-08 01:01:50.459887 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.44s 2026-04-08 01:01:50.459894 | orchestrator | 2026-04-08 01:01:50 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:50.459900 | orchestrator | 2026-04-08 01:01:50 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:50.459905 | orchestrator | 2026-04-08 01:01:50 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:50.459912 | orchestrator | 2026-04-08 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:53.506783 | orchestrator | 2026-04-08 01:01:53 | INFO  | Task a076ccfb-d4e2-465e-a979-1b44a06fe1d7 is in state STARTED 2026-04-08 01:01:53.508585 | orchestrator | 2026-04-08 01:01:53 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:01:53.511497 | orchestrator | 2026-04-08 01:01:53 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:01:53.512634 | orchestrator | 2026-04-08 01:01:53 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state STARTED 2026-04-08 01:01:53.512669 | orchestrator | 
2026-04-08 01:01:53 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated every ~3 seconds from 01:01:56 to 01:03:15: tasks a076ccfb-d4e2-465e-a979-1b44a06fe1d7, 6d315da2-3abd-4fa1-b300-98b272ba8738, 30143153-faa0-4cb8-bb55-56b4d6136148 and 242ee359-364f-4171-8104-43b910aa1e3e remained in state STARTED throughout ...] 2026-04-08 01:03:18.444641 | orchestrator | 2026-04-08 01:03:18 | INFO  | Task
d343e214-e460-4549-91f8-f7c42bf31c07 is in state STARTED 2026-04-08 01:03:18.445645 | orchestrator | 2026-04-08 01:03:18 | INFO  | Task a076ccfb-d4e2-465e-a979-1b44a06fe1d7 is in state STARTED 2026-04-08 01:03:18.445683 | orchestrator | 2026-04-08 01:03:18 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:03:18.447615 | orchestrator | 2026-04-08 01:03:18 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:03:18.448769 | orchestrator | 2026-04-08 01:03:18 | INFO  | Task 242ee359-364f-4171-8104-43b910aa1e3e is in state SUCCESS 2026-04-08 01:03:18.449893 | orchestrator | 2026-04-08 01:03:18.450370 | orchestrator | 2026-04-08 01:03:18.450394 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 01:03:18.450408 | orchestrator | 2026-04-08 01:03:18.450421 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 01:03:18.450435 | orchestrator | Wednesday 08 April 2026 01:01:21 +0000 (0:00:00.385) 0:00:00.385 ******* 2026-04-08 01:03:18.450448 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:03:18.450462 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:03:18.450475 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:03:18.450488 | orchestrator | 2026-04-08 01:03:18.450500 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 01:03:18.450513 | orchestrator | Wednesday 08 April 2026 01:01:21 +0000 (0:00:00.307) 0:00:00.693 ******* 2026-04-08 01:03:18.450526 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-08 01:03:18.450542 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-08 01:03:18.450554 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-08 01:03:18.450567 | orchestrator | 2026-04-08 01:03:18.450581 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-04-08 01:03:18.450595 | orchestrator | 2026-04-08 01:03:18.450608 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-08 01:03:18.450619 | orchestrator | Wednesday 08 April 2026 01:01:21 +0000 (0:00:00.337) 0:00:01.031 ******* 2026-04-08 01:03:18.450633 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:03:18.450647 | orchestrator | 2026-04-08 01:03:18.450659 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-08 01:03:18.450671 | orchestrator | Wednesday 08 April 2026 01:01:22 +0000 (0:00:00.647) 0:00:01.678 ******* 2026-04-08 01:03:18.450685 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-08 01:03:18.450700 | orchestrator | 2026-04-08 01:03:18.450712 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-08 01:03:18.450726 | orchestrator | Wednesday 08 April 2026 01:01:26 +0000 (0:00:03.672) 0:00:05.351 ******* 2026-04-08 01:03:18.450740 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-08 01:03:18.450754 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-08 01:03:18.450769 | orchestrator | 2026-04-08 01:03:18.450782 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-08 01:03:18.450794 | orchestrator | Wednesday 08 April 2026 01:01:33 +0000 (0:00:07.320) 0:00:12.672 ******* 2026-04-08 01:03:18.450808 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-08 01:03:18.450820 | orchestrator | 2026-04-08 01:03:18.450832 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-08 
01:03:18.450845 | orchestrator | Wednesday 08 April 2026 01:01:37 +0000 (0:00:03.593) 0:00:16.265 ******* 2026-04-08 01:03:18.450855 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-08 01:03:18.450869 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-08 01:03:18.450882 | orchestrator | 2026-04-08 01:03:18.450895 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-08 01:03:18.450908 | orchestrator | Wednesday 08 April 2026 01:01:41 +0000 (0:00:04.532) 0:00:20.797 ******* 2026-04-08 01:03:18.450969 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-08 01:03:18.450976 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-08 01:03:18.450988 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-08 01:03:18.450998 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-08 01:03:18.451027 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-08 01:03:18.451033 | orchestrator | 2026-04-08 01:03:18.451039 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-08 01:03:18.451046 | orchestrator | Wednesday 08 April 2026 01:01:58 +0000 (0:00:17.235) 0:00:38.033 ******* 2026-04-08 01:03:18.451052 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-08 01:03:18.451058 | orchestrator | 2026-04-08 01:03:18.451077 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-08 01:03:18.451083 | orchestrator | Wednesday 08 April 2026 01:02:03 +0000 (0:00:04.492) 0:00:42.525 ******* 2026-04-08 01:03:18.451093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2026-04-08 01:03:18.451160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451189 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451196 | orchestrator | 2026-04-08 01:03:18.451202 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-08 01:03:18.451208 | orchestrator | Wednesday 08 April 2026 01:02:05 +0000 (0:00:02.257) 0:00:44.783 ******* 2026-04-08 01:03:18.451214 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-08 01:03:18.451221 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-08 01:03:18.451227 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-08 01:03:18.451235 | orchestrator | 2026-04-08 01:03:18.451242 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-08 01:03:18.451249 | orchestrator | Wednesday 08 April 2026 01:02:06 +0000 (0:00:01.371) 0:00:46.154 ******* 2026-04-08 01:03:18.451261 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:03:18.451268 | orchestrator | 2026-04-08 01:03:18.451275 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-08 01:03:18.451283 | orchestrator | Wednesday 08 April 2026 01:02:07 +0000 (0:00:00.121) 0:00:46.275 ******* 2026-04-08 01:03:18.451291 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:03:18.451297 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:03:18.451304 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 01:03:18.451311 | orchestrator | 2026-04-08 01:03:18.451317 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-08 01:03:18.451324 | orchestrator | Wednesday 08 April 2026 01:02:07 +0000 (0:00:00.253) 0:00:46.528 ******* 2026-04-08 01:03:18.451331 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:03:18.451338 | orchestrator | 2026-04-08 01:03:18.451344 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-08 01:03:18.451350 | orchestrator | Wednesday 08 April 2026 01:02:08 +0000 (0:00:01.020) 0:00:47.549 ******* 2026-04-08 01:03:18.451360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451442 | orchestrator | 2026-04-08 01:03:18.451449 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-08 01:03:18.451456 | orchestrator | Wednesday 08 April 2026 01:02:12 +0000 (0:00:03.989) 0:00:51.538 ******* 2026-04-08 01:03:18.451468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.451475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451494 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:03:18.451505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.451512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451534 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:03:18.451541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.451550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451563 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:03:18.451570 | orchestrator | 2026-04-08 01:03:18.451576 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-08 01:03:18.451583 | orchestrator | Wednesday 08 April 2026 01:02:13 +0000 (0:00:00.934) 0:00:52.472 ******* 2026-04-08 01:03:18.451596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.451608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451621 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:03:18.451628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.451642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451657 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:03:18.451668 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.451681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.451695 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:03:18.451702 | orchestrator | 2026-04-08 01:03:18.451708 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-08 01:03:18.451714 | orchestrator | Wednesday 08 April 2026 01:02:14 +0000 (0:00:01.241) 0:00:53.713 ******* 2026-04-08 01:03:18.451724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451810 | orchestrator | 2026-04-08 01:03:18.451816 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-08 01:03:18.451823 | orchestrator | Wednesday 08 April 2026 01:02:18 +0000 (0:00:03.548) 0:00:57.262 ******* 2026-04-08 01:03:18.451829 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:03:18.451836 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:03:18.451842 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:03:18.451848 | orchestrator | 2026-04-08 01:03:18.451854 | orchestrator | TASK [barbican : Checking whether 
barbican-api-paste.ini file exists] ********** 2026-04-08 01:03:18.451860 | orchestrator | Wednesday 08 April 2026 01:02:20 +0000 (0:00:02.129) 0:00:59.391 ******* 2026-04-08 01:03:18.451867 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 01:03:18.451874 | orchestrator | 2026-04-08 01:03:18.451881 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-08 01:03:18.451887 | orchestrator | Wednesday 08 April 2026 01:02:21 +0000 (0:00:00.850) 0:01:00.242 ******* 2026-04-08 01:03:18.451892 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:03:18.451898 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:03:18.451904 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:03:18.451910 | orchestrator | 2026-04-08 01:03:18.451937 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-08 01:03:18.451943 | orchestrator | Wednesday 08 April 2026 01:02:21 +0000 (0:00:00.749) 0:01:00.992 ******* 2026-04-08 01:03:18.451950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 
01:03:18.451962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.451986 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.451993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452007 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452037 | orchestrator | 2026-04-08 01:03:18.452043 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-08 
01:03:18.452050 | orchestrator | Wednesday 08 April 2026 01:02:31 +0000 (0:00:10.105) 0:01:11.097 ******* 2026-04-08 01:03:18.452062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.452069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.452076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.452083 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:03:18.452093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.452106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.452116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.452122 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:03:18.452129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-08 01:03:18.452135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.452142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:03:18.452148 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:03:18.452154 | orchestrator | 2026-04-08 01:03:18.452160 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-08 01:03:18.452166 | orchestrator | Wednesday 08 April 2026 01:02:33 +0000 (0:00:01.337) 0:01:12.435 ******* 2026-04-08 01:03:18.452180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.452193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.452199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-08 01:03:18.452205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:03:18.452259 | orchestrator | 2026-04-08 01:03:18.452265 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-08 01:03:18.452270 | orchestrator | Wednesday 08 April 2026 01:02:36 +0000 (0:00:02.984) 0:01:15.419 ******* 2026-04-08 01:03:18.452276 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:03:18.452283 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:03:18.452290 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:03:18.452296 | orchestrator | 2026-04-08 01:03:18.452303 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-08 01:03:18.452309 | orchestrator | Wednesday 08 April 2026 01:02:36 +0000 (0:00:00.493) 0:01:15.913 ******* 2026-04-08 01:03:18.452316 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:03:18.452322 | orchestrator | 2026-04-08 01:03:18.452328 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-08 01:03:18.452335 | orchestrator | Wednesday 08 April 2026 01:02:39 +0000 (0:00:02.756) 0:01:18.669 ******* 2026-04-08 01:03:18.452342 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:03:18.452348 | orchestrator | 2026-04-08 01:03:18.452354 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-08 01:03:18.452361 | orchestrator | Wednesday 08 April 2026 01:02:42 +0000 (0:00:02.616) 0:01:21.286 ******* 2026-04-08 01:03:18.452368 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:03:18.452373 | orchestrator | 2026-04-08 01:03:18.452380 | orchestrator 
| TASK [barbican : Flush handlers] *********************************************** 2026-04-08 01:03:18.452391 | orchestrator | Wednesday 08 April 2026 01:02:55 +0000 (0:00:13.320) 0:01:34.606 ******* 2026-04-08 01:03:18.452398 | orchestrator | 2026-04-08 01:03:18.452405 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-08 01:03:18.452412 | orchestrator | Wednesday 08 April 2026 01:02:56 +0000 (0:00:00.656) 0:01:35.263 ******* 2026-04-08 01:03:18.452418 | orchestrator | 2026-04-08 01:03:18.452424 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-08 01:03:18.452431 | orchestrator | Wednesday 08 April 2026 01:02:56 +0000 (0:00:00.175) 0:01:35.439 ******* 2026-04-08 01:03:18.452436 | orchestrator | 2026-04-08 01:03:18.452442 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-08 01:03:18.452448 | orchestrator | Wednesday 08 April 2026 01:02:56 +0000 (0:00:00.182) 0:01:35.622 ******* 2026-04-08 01:03:18.452454 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:03:18.452459 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:03:18.452466 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:03:18.452471 | orchestrator | 2026-04-08 01:03:18.452477 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-08 01:03:18.452484 | orchestrator | Wednesday 08 April 2026 01:03:04 +0000 (0:00:07.685) 0:01:43.307 ******* 2026-04-08 01:03:18.452489 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:03:18.452495 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:03:18.452501 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:03:18.452506 | orchestrator | 2026-04-08 01:03:18.452517 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-08 01:03:18.452523 | orchestrator | Wednesday 08 April 
2026 01:03:10 +0000 (0:00:06.430) 0:01:49.738 *******
2026-04-08 01:03:18.452529 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:03:18.452535 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:03:18.452543 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:03:18.452555 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:03:18.452563 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-08 01:03:18.452568 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-08 01:03:18.452572 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-08 01:03:18.452584 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:03:18.452587 | orchestrator | Wednesday 08 April 2026 01:03:16 +0000 (0:00:06.016) 0:01:55.755 *******
2026-04-08 01:03:18.452591 | orchestrator | ===============================================================================
2026-04-08 01:03:18.452595 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.24s
2026-04-08 01:03:18.452602 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.32s
2026-04-08 01:03:18.452607 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.11s
2026-04-08 01:03:18.452610 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.69s
2026-04-08 01:03:18.452614 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.32s
2026-04-08 01:03:18.452618 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.43s
2026-04-08 01:03:18.452622 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.02s
2026-04-08 01:03:18.452626 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.53s
2026-04-08 01:03:18.452630 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.49s
2026-04-08 01:03:18.452641 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.99s
2026-04-08 01:03:18.452645 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.67s
2026-04-08 01:03:18.452649 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.59s
2026-04-08 01:03:18.452653 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.55s
2026-04-08 01:03:18.452657 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.98s
2026-04-08 01:03:18.452661 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.76s
2026-04-08 01:03:18.452664 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.62s
2026-04-08 01:03:18.452668 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.26s
2026-04-08 01:03:18.452672 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.13s
2026-04-08 01:03:18.452676 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.37s
2026-04-08 01:03:18.452680 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.34s
2026-04-08 01:03:18.452683 | orchestrator | 2026-04-08 01:03:18 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:03:21.476734 | orchestrator | 2026-04-08 01:03:21 | INFO  | Task d343e214-e460-4549-91f8-f7c42bf31c07
is in state STARTED
2026-04-08 01:03:21.477096 | orchestrator | 2026-04-08 01:03:21 | INFO  | Task a076ccfb-d4e2-465e-a979-1b44a06fe1d7 is in state STARTED
2026-04-08 01:03:21.477741 | orchestrator | 2026-04-08 01:03:21 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:03:21.478202 | orchestrator | 2026-04-08 01:03:21 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED
2026-04-08 01:03:21.478241 | orchestrator | 2026-04-08 01:03:21 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for the same four tasks repeated from 01:03:24 through 01:04:25, all in state STARTED ...]
2026-04-08 01:04:28.321482 | orchestrator | 2026-04-08 01:04:28 | INFO  | Task d343e214-e460-4549-91f8-f7c42bf31c07 is in state SUCCESS
2026-04-08 01:04:28.321610 | orchestrator | 2026-04-08 01:04:28 | INFO  | Task a076ccfb-d4e2-465e-a979-1b44a06fe1d7 is in state STARTED
2026-04-08 01:04:28.322237 | orchestrator | 2026-04-08 01:04:28 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:04:28.323788 | orchestrator | 2026-04-08 01:04:28 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED
2026-04-08 01:04:28.324420 | orchestrator | 2026-04-08 01:04:28 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED
2026-04-08 01:04:28.324474 | orchestrator | 2026-04-08 01:04:28 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for the remaining four tasks repeated from 01:04:31 through 01:05:04, all in state STARTED ...]
2026-04-08 01:05:07.820611 | orchestrator | 2026-04-08 01:05:07 | INFO  | Task a076ccfb-d4e2-465e-a979-1b44a06fe1d7 is in state SUCCESS
2026-04-08 01:05:07.824347 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-08 01:05:07.824359 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-08 01:05:07.824365 | orchestrator | Wednesday 08 April 2026 01:03:21 +0000 (0:00:00.190) 0:00:00.190 *******
2026-04-08 01:05:07.824371 | orchestrator | changed: [localhost]
2026-04-08
01:05:07.824382 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-08 01:05:07.824387 | orchestrator | Wednesday 08 April 2026 01:03:22 +0000 (0:00:00.792) 0:00:00.983 *******
2026-04-08 01:05:07.824392 | orchestrator | changed: [localhost]
2026-04-08 01:05:07.824403 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-08 01:05:07.824408 | orchestrator | Wednesday 08 April 2026 01:03:55 +0000 (0:00:33.156) 0:00:34.139 *******
2026-04-08 01:05:07.824414 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-04-08 01:05:07.824419 | orchestrator | changed: [localhost]
2026-04-08 01:05:07.824429 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:05:07.824440 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:05:07.824445 | orchestrator | Wednesday 08 April 2026 01:04:23 +0000 (0:00:28.142) 0:01:02.281 *******
2026-04-08 01:05:07.824450 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:05:07.824455 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:05:07.824461 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:05:07.824485 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:05:07.824544 | orchestrator | Wednesday 08 April 2026 01:04:24 +0000 (0:00:00.508) 0:01:02.790 *******
2026-04-08 01:05:07.824577 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-08 01:05:07.824583 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-08 01:05:07.824595 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
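The "FAILED - RETRYING: ... (3 retries left)" line above is the characteristic output of an Ansible task that registers its result and retries until it succeeds. A minimal sketch of such a task, assuming hypothetical URL and destination values (this is not the actual kolla-ansible/OSISM task definition):

```yaml
# Hypothetical sketch of a download task with retry behaviour.
# The url and dest values here are placeholders, not the real ones.
- name: Download ironic-agent kernel
  get_url:
    url: "https://example.com/ironic-agent.kernel"
    dest: "/opt/ironic-ipa/ironic-agent.kernel"
    mode: "0644"
  register: result
  until: result is success
  retries: 3
  delay: 5
```

With `retries: 3`, each failed attempt prints a "FAILED - RETRYING" line with the remaining count before the task finally reports `changed` or fails.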
2026-04-08 01:05:07.824601 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-08 01:05:07.824611 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-08 01:05:07.824616 | orchestrator | skipping: no hosts matched
2026-04-08 01:05:07.824627 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:05:07.824633 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:05:07.824639 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:05:07.824646 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:05:07.824695 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:05:07.824711 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:05:07.824716 | orchestrator | Wednesday 08 April 2026 01:04:25 +0000 (0:00:01.061) 0:01:03.852 *******
2026-04-08 01:05:07.824893 | orchestrator | ===============================================================================
2026-04-08 01:05:07.824901 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.16s
2026-04-08 01:05:07.824906 | orchestrator | Download ironic-agent kernel ------------------------------------------- 28.14s
2026-04-08 01:05:07.824911 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s
2026-04-08 01:05:07.824916 | orchestrator | Ensure the destination directory exists --------------------------------- 0.79s
2026-04-08 01:05:07.824922 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s
2026-04-08 01:05:07.824948 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:05:07.825136 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:05:07.825151 | orchestrator | Wednesday 08 April 2026 01:01:52 +0000 (0:00:00.271) 0:00:00.271 *******
2026-04-08 01:05:07.825159 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:05:07.825168 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:05:07.825176 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:05:07.825193 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:05:07.825202 | orchestrator | Wednesday 08 April 2026 01:01:53 +0000 (0:00:00.302) 0:00:00.573 *******
2026-04-08 01:05:07.825211 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-08 01:05:07.825220 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-08 01:05:07.825228 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-08 01:05:07.825246 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-08 01:05:07.825257 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-08 01:05:07.825262 | orchestrator | Wednesday 08 April 2026 01:01:53 +0000 (0:00:00.258) 0:00:00.832 *******
2026-04-08 01:05:07.825276 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
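The long runs of "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines earlier in this log come from the orchestrator polling asynchronous task states until each task reports SUCCESS. A minimal sketch of that polling pattern, assuming a stand-in `get_state` lookup (the real implementation in the OSISM client queries Celery task results and differs in detail):

```python
import time

def wait_for_tasks(task_ids, get_state, delay=1):
    """Poll task states until every task has reported SUCCESS.

    get_state(task_id) -> str is a stand-in for the real state lookup
    (e.g. a Celery AsyncResult query). A task is dropped from the poll
    set once it reports SUCCESS; all others are re-checked after a wait.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                results[task_id] = state
        pending -= set(results)
        if pending:
            print(f"Wait {delay} second(s) until the next check")
            time.sleep(delay)
    return results
```

This mirrors the observable behaviour above: each cycle prints one line per still-running task, then a wait message, and a task's UUID stops appearing once it reaches SUCCESS.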
2026-04-08 01:05:07.825282 | orchestrator | 
2026-04-08 01:05:07.825287 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-08 01:05:07.825317 | orchestrator | Wednesday 08 April 2026 01:01:54 +0000 (0:00:00.742) 0:00:01.575 *******
2026-04-08 01:05:07.825323 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-08 01:05:07.825328 | orchestrator | 
2026-04-08 01:05:07.825333 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-08 01:05:07.825338 | orchestrator | Wednesday 08 April 2026 01:01:58 +0000 (0:00:04.444) 0:00:06.019 *******
2026-04-08 01:05:07.825354 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-08 01:05:07.825360 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-08 01:05:07.825365 | orchestrator | 
2026-04-08 01:05:07.825370 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-08 01:05:07.825376 | orchestrator | Wednesday 08 April 2026 01:02:05 +0000 (0:00:07.200) 0:00:13.220 *******
2026-04-08 01:05:07.825381 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-08 01:05:07.825386 | orchestrator | 
2026-04-08 01:05:07.825391 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-08 01:05:07.825396 | orchestrator | Wednesday 08 April 2026 01:02:09 +0000 (0:00:03.978) 0:00:17.198 *******
2026-04-08 01:05:07.825402 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-08 01:05:07.825407 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-08 01:05:07.825412 | orchestrator | 
2026-04-08 01:05:07.825417 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-08 01:05:07.825422 | orchestrator | Wednesday 08 April 2026 01:02:14 +0000 (0:00:04.668) 0:00:21.866 *******
2026-04-08 01:05:07.825427 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-08 01:05:07.825433 | orchestrator | 
2026-04-08 01:05:07.825438 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-04-08 01:05:07.825443 | orchestrator | Wednesday 08 April 2026 01:02:18 +0000 (0:00:04.083) 0:00:25.950 *******
2026-04-08 01:05:07.825453 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-04-08 01:05:07.825459 | orchestrator | 
2026-04-08 01:05:07.825464 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-04-08 01:05:07.825469 | orchestrator | Wednesday 08 April 2026 01:02:23 +0000 (0:00:04.623) 0:00:30.573 *******
2026-04-08 01:05:07.825476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.825483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.825494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.825519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.825530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.825552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.825557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825658 | orchestrator | 
2026-04-08 01:05:07.825663 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-04-08 01:05:07.825669 | orchestrator | Wednesday 08 April 2026 01:02:29 +0000 (0:00:06.026) 0:00:36.600 *******
2026-04-08 01:05:07.825674 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:07.825679 | orchestrator | 
2026-04-08 01:05:07.825684 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-04-08 01:05:07.825690 | orchestrator | Wednesday 08 April 2026 01:02:29 +0000 (0:00:00.266) 0:00:36.866 *******
2026-04-08 01:05:07.825695 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:07.825702 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:05:07.825708 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:05:07.825713 | orchestrator | 
2026-04-08 01:05:07.825718 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-08 01:05:07.825723 | orchestrator | Wednesday 08 April 2026 01:02:30 +0000 (0:00:00.583) 0:00:37.451 *******
2026-04-08 01:05:07.825728 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:05:07.825733 | orchestrator | 
2026-04-08 01:05:07.825738 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-04-08 01:05:07.825743 | orchestrator | Wednesday 08 April 2026 01:02:30 +0000 (0:00:00.549) 0:00:38.000 *******
2026-04-08 01:05:07.825749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.825758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.825776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.825784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.825793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.825799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.825811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.825950 | orchestrator | 
2026-04-08 01:05:07.825959 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-04-08 01:05:07.825969 | orchestrator | Wednesday 08 April 2026 01:02:39 +0000 (0:00:08.989) 0:00:46.990 *******
2026-04-08 01:05:07.825988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.825998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.826007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826098 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:05:07.826115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.826128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.826139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826186 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:07.826199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.826209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.826219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.826229 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826266 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:07.826271 | orchestrator | 2026-04-08 01:05:07.826276 | orchestrator | TASK 
[service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-08 01:05:07.826282 | orchestrator | Wednesday 08 April 2026 01:02:40 +0000 (0:00:01.147) 0:00:48.138 ******* 2026-04-08 01:05:07.826290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.826296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 01:05:07.826301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826330 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:07.826338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.826343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-04-08 01:05:07.826348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826377 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:07.826384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.826390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 01:05:07.826396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826424 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:07.826429 | orchestrator | 2026-04-08 01:05:07.826434 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-08 01:05:07.826440 | orchestrator | Wednesday 08 April 2026 01:02:41 +0000 (0:00:01.045) 0:00:49.183 ******* 2026-04-08 01:05:07.826448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 01:05:07.826454 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 01:05:07.826459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 01:05:07.826467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826648 | orchestrator | 2026-04-08 01:05:07.826657 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-08 01:05:07.826666 | orchestrator | Wednesday 08 April 2026 01:02:49 +0000 (0:00:07.683) 0:00:56.867 ******* 2026-04-08 01:05:07.826675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 01:05:07.826688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 01:05:07.826694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-08 01:05:07.826699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826802 | orchestrator | 2026-04-08 01:05:07.826808 | orchestrator | TASK [designate : 
Copying over pools.yaml] ************************************* 2026-04-08 01:05:07.826873 | orchestrator | Wednesday 08 April 2026 01:03:11 +0000 (0:00:21.881) 0:01:18.748 ******* 2026-04-08 01:05:07.826885 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-08 01:05:07.826892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-08 01:05:07.826901 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-08 01:05:07.826906 | orchestrator | 2026-04-08 01:05:07.826911 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-08 01:05:07.826916 | orchestrator | Wednesday 08 April 2026 01:03:17 +0000 (0:00:05.641) 0:01:24.390 ******* 2026-04-08 01:05:07.826921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-08 01:05:07.826926 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-08 01:05:07.826932 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-08 01:05:07.826937 | orchestrator | 2026-04-08 01:05:07.826942 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-08 01:05:07.826947 | orchestrator | Wednesday 08 April 2026 01:03:20 +0000 (0:00:03.719) 0:01:28.109 ******* 2026-04-08 01:05:07.826956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.826962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.826967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.826976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.826986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.826999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.827010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.827038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827062 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.827067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.827075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:05:07.827080 | orchestrator | 2026-04-08 01:05:07.827085 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-08 01:05:07.827091 | 
orchestrator | Wednesday 08 April 2026 01:03:24 +0000 (0:00:03.353) 0:01:31.462 ******* 2026-04-08 01:05:07.827096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.827101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.827113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-08 01:05:07.827190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-08 01:05:07.827213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 01:05:07.827244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.827255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.827261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.854902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.854968 | orchestrator |
2026-04-08 01:05:07.854975 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-08 01:05:07.854981 | orchestrator | Wednesday 08 April 2026 01:03:27 +0000 (0:00:03.617) 0:01:35.079 *******
2026-04-08 01:05:07.854986 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:07.854992 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:05:07.855000 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:05:07.855012 | orchestrator |
2026-04-08 01:05:07.855024 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-08 01:05:07.855032 | orchestrator | Wednesday 08 April 2026 01:03:28 +0000 (0:00:00.247) 0:01:35.327 *******
2026-04-08 01:05:07.855041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.855055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.855070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855113 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:07.855122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.855130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.855149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.855158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.855182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855229 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:05:07.855238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855270 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:05:07.855278 | orchestrator |
2026-04-08 01:05:07.855283 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-04-08 01:05:07.855289 | orchestrator | Wednesday 08 April 2026 01:03:29 +0000 (0:00:01.008) 0:01:36.335 *******
2026-04-08 01:05:07.855294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.855304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.855313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-08 01:05:07.855318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.855341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.855347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-08 01:05:07.855352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes':
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-08 01:05:07.855445 | orchestrator |
2026-04-08 01:05:07.855452 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-08 01:05:07.855459 | orchestrator | Wednesday 08 April 2026 01:03:35 +0000 (0:00:06.521) 0:01:42.856 *******
2026-04-08 01:05:07.855465 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:07.855471 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:05:07.855478 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:05:07.855483 | orchestrator |
2026-04-08 01:05:07.855490 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-04-08 01:05:07.855496 | orchestrator | Wednesday 08 April 2026 01:03:36 +0000 (0:00:00.422) 0:01:43.279 *******
2026-04-08 01:05:07.855503 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-08 01:05:07.855509 | orchestrator |
2026-04-08 01:05:07.855516 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-08 01:05:07.855533 | orchestrator | Wednesday 08 April 2026 01:03:38 +0000 (0:00:02.509) 0:01:45.788 *******
2026-04-08 01:05:07.855548 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-08 01:05:07.855558 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-08 01:05:07.855567 | orchestrator |
2026-04-08 01:05:07.855576 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-08 01:05:07.855584 | orchestrator | Wednesday 08 April 2026 01:03:41 +0000 (0:00:02.634) 0:01:48.423 *******
2026-04-08 01:05:07.855593 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855601 | orchestrator |
2026-04-08 01:05:07.855610 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-08 01:05:07.855619 | orchestrator | Wednesday 08 April 2026 01:03:54 +0000 (0:00:13.574) 0:02:01.997 *******
2026-04-08 01:05:07.855629 | orchestrator |
2026-04-08 01:05:07.855639 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-08 01:05:07.855649 | orchestrator | Wednesday 08 April 2026 01:03:54 +0000 (0:00:00.070) 0:02:02.068 *******
2026-04-08 01:05:07.855658 | orchestrator |
2026-04-08 01:05:07.855668 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-08 01:05:07.855676 | orchestrator | Wednesday 08 April 2026 01:03:54 +0000 (0:00:00.074) 0:02:02.143 *******
2026-04-08 01:05:07.855682 | orchestrator |
2026-04-08 01:05:07.855688 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-08 01:05:07.855694 | orchestrator | Wednesday 08 April 2026 01:03:54 +0000 (0:00:00.076) 0:02:02.219 *******
2026-04-08 01:05:07.855701 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:05:07.855707 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855713 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:05:07.855720 | orchestrator |
2026-04-08 01:05:07.855726 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-08 01:05:07.855732 | orchestrator | Wednesday 08 April 2026 01:04:07 +0000 (0:00:12.649) 0:02:14.869 *******
2026-04-08 01:05:07.855739 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855745 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:05:07.855752 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:05:07.855758 | orchestrator |
2026-04-08 01:05:07.855764 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-08 01:05:07.855770 | orchestrator | Wednesday 08 April 2026 01:04:18 +0000 (0:00:11.202) 0:02:26.072 *******
2026-04-08 01:05:07.855776 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855781 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:05:07.855786 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:05:07.855791 | orchestrator |
2026-04-08 01:05:07.855796 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-08 01:05:07.855804 | orchestrator | Wednesday 08 April 2026 01:04:25 +0000 (0:00:06.253) 0:02:32.326 *******
2026-04-08 01:05:07.855810 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855831 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:05:07.855837 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:05:07.855842 | orchestrator |
2026-04-08 01:05:07.855847 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-08 01:05:07.855853 | orchestrator | Wednesday 08 April 2026 01:04:37 +0000 (0:00:12.537) 0:02:44.864 *******
2026-04-08 01:05:07.855858 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855863 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:05:07.855868 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:05:07.855873 | orchestrator |
2026-04-08 01:05:07.855878 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-08 01:05:07.855883 | orchestrator | Wednesday 08 April 2026 01:04:49 +0000 (0:00:12.092) 0:02:56.956 *******
2026-04-08 01:05:07.855888 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855893 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:05:07.855899 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:05:07.855909 | orchestrator |
2026-04-08 01:05:07.855914 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-08 01:05:07.855919 | orchestrator | Wednesday 08 April 2026 01:04:57 +0000 (0:00:08.133) 0:03:05.090 *******
2026-04-08 01:05:07.855924 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:07.855929 | orchestrator |
2026-04-08 01:05:07.855935 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:05:07.855941 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-08 01:05:07.855947 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-08 01:05:07.855952 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-08 01:05:07.855957 | orchestrator |
2026-04-08 01:05:07.855962 | orchestrator |
2026-04-08 01:05:07.855975 | orchestrator | TASKS RECAP *****************************************************************2026-04-08 01:05:07 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:07.855985 | orchestrator | 2026-04-08 01:05:07 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:07.855993 | orchestrator | 2026-04-08 01:05:07 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED
2026-04-08 01:05:07.856001 | orchestrator | 2026-04-08 01:05:07 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED
2026-04-08 01:05:07.856009 | orchestrator | 2026-04-08 01:05:07 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:05:07.856018 | orchestrator | ***
2026-04-08 01:05:07.856027 | orchestrator | Wednesday 08 April 2026 01:05:05 +0000 (0:00:07.198) 0:03:12.288 *******
2026-04-08 01:05:07.856035 | orchestrator | ===============================================================================
2026-04-08 01:05:07.856044 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.88s
2026-04-08 01:05:07.856051 | orchestrator | designate : Running Designate bootstrap container ---------------------- 13.57s
2026-04-08 01:05:07.856058 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.65s
2026-04-08 01:05:07.856066 | orchestrator | designate : Restart designate-producer container ----------------------- 12.54s
2026-04-08 01:05:07.856074 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.09s
2026-04-08 01:05:07.856083 | orchestrator | designate : Restart designate-api container ---------------------------- 11.20s
2026-04-08 01:05:07.856091 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.99s
2026-04-08 01:05:07.856100 | orchestrator | designate : Restart designate-worker container -------------------------- 8.13s
2026-04-08 01:05:07.856108 | orchestrator | designate : Copying over config.json files for services ----------------- 7.68s
2026-04-08 01:05:07.856117 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.20s
2026-04-08 01:05:07.856126 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.20s
2026-04-08 01:05:07.856135 | orchestrator | designate : Check designate containers ---------------------------------- 6.52s
2026-04-08 01:05:07.856143 | orchestrator | designate : Restart designate-central container ------------------------- 6.25s
2026-04-08 01:05:07.856149 | orchestrator | designate : Ensuring config directories exist --------------------------- 6.03s
2026-04-08 01:05:07.856154 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.64s
2026-04-08 01:05:07.856159 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.67s
2026-04-08 01:05:07.856164 | orchestrator |
service-ks-register : designate | Granting user roles ------------------- 4.62s 2026-04-08 01:05:07.856169 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.44s 2026-04-08 01:05:07.856179 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.08s 2026-04-08 01:05:07.856184 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.98s 2026-04-08 01:05:10.878745 | orchestrator | 2026-04-08 01:05:10 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:05:10.881609 | orchestrator | 2026-04-08 01:05:10 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:05:10.883111 | orchestrator | 2026-04-08 01:05:10 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED 2026-04-08 01:05:10.884143 | orchestrator | 2026-04-08 01:05:10 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:05:10.884190 | orchestrator | 2026-04-08 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:05:13.913102 | orchestrator | 2026-04-08 01:05:13 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:05:13.914511 | orchestrator | 2026-04-08 01:05:13 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:05:13.919692 | orchestrator | 2026-04-08 01:05:13 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED 2026-04-08 01:05:13.921128 | orchestrator | 2026-04-08 01:05:13 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:05:13.921626 | orchestrator | 2026-04-08 01:05:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:05:16.966969 | orchestrator | 2026-04-08 01:05:16 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:05:16.967520 | orchestrator | 2026-04-08 01:05:16 | INFO  | Task 
4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:05:35.284403 | orchestrator | 2026-04-08 01:05:35 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED 2026-04-08 01:05:35.285073 | orchestrator | 2026-04-08 01:05:35 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state STARTED 2026-04-08 01:05:35.285154 | orchestrator | 2026-04-08 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:05:38.329538 | orchestrator | 2026-04-08 01:05:38 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:05:38.329589 | orchestrator | 2026-04-08 01:05:38 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:05:38.329594 | orchestrator | 2026-04-08 01:05:38 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED 2026-04-08 01:05:38.329598 | orchestrator | 2026-04-08 01:05:38 | INFO  | Task 30143153-faa0-4cb8-bb55-56b4d6136148 is in state SUCCESS 2026-04-08 01:05:38.330468 | orchestrator | 2026-04-08 01:05:38.330501 | orchestrator | 2026-04-08 01:05:38.330508 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 01:05:38.330514 | orchestrator | 2026-04-08 01:05:38.330519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 01:05:38.330524 | orchestrator | Wednesday 08 April 2026 01:01:15 +0000 (0:00:00.314) 0:00:00.314 ******* 2026-04-08 01:05:38.330530 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:05:38.330536 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:05:38.330542 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:05:38.330548 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:05:38.330553 | orchestrator | ok: [testbed-node-4] 2026-04-08 01:05:38.330558 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:05:38.330564 | orchestrator | 2026-04-08 01:05:38.330570 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2026-04-08 01:05:38.330591 | orchestrator | Wednesday 08 April 2026 01:01:16 +0000 (0:00:00.575) 0:00:00.890 ******* 2026-04-08 01:05:38.330595 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-08 01:05:38.330599 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-08 01:05:38.330602 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-08 01:05:38.330606 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-08 01:05:38.330610 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-08 01:05:38.330615 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-08 01:05:38.330620 | orchestrator | 2026-04-08 01:05:38.330623 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-08 01:05:38.330626 | orchestrator | 2026-04-08 01:05:38.330630 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-08 01:05:38.330633 | orchestrator | Wednesday 08 April 2026 01:01:17 +0000 (0:00:00.726) 0:00:01.616 ******* 2026-04-08 01:05:38.330637 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 01:05:38.330641 | orchestrator | 2026-04-08 01:05:38.330644 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-08 01:05:38.330648 | orchestrator | Wednesday 08 April 2026 01:01:18 +0000 (0:00:01.177) 0:00:02.794 ******* 2026-04-08 01:05:38.330651 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:05:38.330654 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:05:38.330658 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:05:38.330661 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:05:38.330664 | orchestrator | ok: [testbed-node-4] 
2026-04-08 01:05:38.330667 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:05:38.330671 | orchestrator | 2026-04-08 01:05:38.330674 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-08 01:05:38.330677 | orchestrator | Wednesday 08 April 2026 01:01:19 +0000 (0:00:01.475) 0:00:04.269 ******* 2026-04-08 01:05:38.330681 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:05:38.330684 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:05:38.330690 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:05:38.330695 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:05:38.330700 | orchestrator | ok: [testbed-node-4] 2026-04-08 01:05:38.330705 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:05:38.330711 | orchestrator | 2026-04-08 01:05:38.330716 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-08 01:05:38.330721 | orchestrator | Wednesday 08 April 2026 01:01:21 +0000 (0:00:01.334) 0:00:05.604 ******* 2026-04-08 01:05:38.330727 | orchestrator | ok: [testbed-node-0] => { 2026-04-08 01:05:38.330733 | orchestrator |  "changed": false, 2026-04-08 01:05:38.330904 | orchestrator |  "msg": "All assertions passed" 2026-04-08 01:05:38.330913 | orchestrator | } 2026-04-08 01:05:38.331171 | orchestrator | ok: [testbed-node-1] => { 2026-04-08 01:05:38.331180 | orchestrator |  "changed": false, 2026-04-08 01:05:38.331186 | orchestrator |  "msg": "All assertions passed" 2026-04-08 01:05:38.331190 | orchestrator | } 2026-04-08 01:05:38.331195 | orchestrator | ok: [testbed-node-2] => { 2026-04-08 01:05:38.331200 | orchestrator |  "changed": false, 2026-04-08 01:05:38.331205 | orchestrator |  "msg": "All assertions passed" 2026-04-08 01:05:38.331211 | orchestrator | } 2026-04-08 01:05:38.331216 | orchestrator | ok: [testbed-node-3] => { 2026-04-08 01:05:38.331221 | orchestrator |  "changed": false, 2026-04-08 01:05:38.331236 | orchestrator |  "msg": "All assertions 
passed" 2026-04-08 01:05:38.331242 | orchestrator | } 2026-04-08 01:05:38.331247 | orchestrator | ok: [testbed-node-4] => { 2026-04-08 01:05:38.331252 | orchestrator |  "changed": false, 2026-04-08 01:05:38.331257 | orchestrator |  "msg": "All assertions passed" 2026-04-08 01:05:38.331262 | orchestrator | } 2026-04-08 01:05:38.331268 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 01:05:38.331273 | orchestrator |  "changed": false, 2026-04-08 01:05:38.331287 | orchestrator |  "msg": "All assertions passed" 2026-04-08 01:05:38.331292 | orchestrator | } 2026-04-08 01:05:38.331298 | orchestrator | 2026-04-08 01:05:38.331303 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-08 01:05:38.331309 | orchestrator | Wednesday 08 April 2026 01:01:21 +0000 (0:00:00.658) 0:00:06.262 ******* 2026-04-08 01:05:38.331315 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.331321 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.331326 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.331331 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.331336 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.331342 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.331348 | orchestrator | 2026-04-08 01:05:38.331353 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-08 01:05:38.331358 | orchestrator | Wednesday 08 April 2026 01:01:22 +0000 (0:00:00.760) 0:00:07.023 ******* 2026-04-08 01:05:38.331363 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-08 01:05:38.331369 | orchestrator | 2026-04-08 01:05:38.331374 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-08 01:05:38.331379 | orchestrator | Wednesday 08 April 2026 01:01:26 +0000 (0:00:03.583) 0:00:10.607 ******* 2026-04-08 01:05:38.331385 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-08 01:05:38.331392 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-08 01:05:38.331397 | orchestrator | 2026-04-08 01:05:38.331430 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-08 01:05:38.331437 | orchestrator | Wednesday 08 April 2026 01:01:33 +0000 (0:00:07.339) 0:00:17.946 ******* 2026-04-08 01:05:38.331443 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-08 01:05:38.331449 | orchestrator | 2026-04-08 01:05:38.331453 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-08 01:05:38.331459 | orchestrator | Wednesday 08 April 2026 01:01:37 +0000 (0:00:03.716) 0:00:21.663 ******* 2026-04-08 01:05:38.331464 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-08 01:05:38.331470 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-08 01:05:38.331475 | orchestrator | 2026-04-08 01:05:38.331480 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-08 01:05:38.331486 | orchestrator | Wednesday 08 April 2026 01:01:41 +0000 (0:00:04.419) 0:00:26.083 ******* 2026-04-08 01:05:38.331491 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-08 01:05:38.331497 | orchestrator | 2026-04-08 01:05:38.331502 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-08 01:05:38.331507 | orchestrator | Wednesday 08 April 2026 01:01:45 +0000 (0:00:03.286) 0:00:29.369 ******* 2026-04-08 01:05:38.331513 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-08 01:05:38.331518 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-08 01:05:38.331523 | orchestrator | 
2026-04-08 01:05:38.331528 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-08 01:05:38.331534 | orchestrator | Wednesday 08 April 2026 01:01:53 +0000 (0:00:08.224) 0:00:37.594 ******* 2026-04-08 01:05:38.331539 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.331544 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.331549 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.331553 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.331556 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.331559 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.331563 | orchestrator | 2026-04-08 01:05:38.331566 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-08 01:05:38.331569 | orchestrator | Wednesday 08 April 2026 01:01:53 +0000 (0:00:00.493) 0:00:38.088 ******* 2026-04-08 01:05:38.331578 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.331581 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.331584 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.331588 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.331591 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.331595 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.331598 | orchestrator | 2026-04-08 01:05:38.331601 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-08 01:05:38.331605 | orchestrator | Wednesday 08 April 2026 01:01:55 +0000 (0:00:02.037) 0:00:40.125 ******* 2026-04-08 01:05:38.331608 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:05:38.331611 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:05:38.331615 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:05:38.331618 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:05:38.331621 | orchestrator | ok: [testbed-node-4] 
2026-04-08 01:05:38.331625 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:05:38.331628 | orchestrator | 2026-04-08 01:05:38.331631 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-08 01:05:38.331635 | orchestrator | Wednesday 08 April 2026 01:01:56 +0000 (0:00:00.883) 0:00:41.008 ******* 2026-04-08 01:05:38.331638 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.331641 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.331644 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.331648 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.331651 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.331654 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.331657 | orchestrator | 2026-04-08 01:05:38.331661 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-08 01:05:38.331664 | orchestrator | Wednesday 08 April 2026 01:01:58 +0000 (0:00:02.088) 0:00:43.097 ******* 2026-04-08 01:05:38.331674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 
01:05:38.331695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.331700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.331707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.331711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.331716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.331720 | orchestrator | 2026-04-08 01:05:38.331723 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-08 01:05:38.331726 | orchestrator | Wednesday 08 April 2026 01:02:01 +0000 (0:00:02.933) 0:00:46.031 ******* 2026-04-08 01:05:38.331730 | orchestrator | [WARNING]: Skipped 2026-04-08 01:05:38.331734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-08 01:05:38.331739 | orchestrator | due to this access issue: 2026-04-08 01:05:38.331745 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-08 01:05:38.331750 | orchestrator | a directory 2026-04-08 01:05:38.331753 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 01:05:38.331757 | orchestrator | 2026-04-08 01:05:38.331773 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-08 01:05:38.331808 | orchestrator | Wednesday 08 April 2026 01:02:02 +0000 (0:00:00.892) 0:00:46.923 ******* 2026-04-08 01:05:38.331818 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 01:05:38.331830 | orchestrator | 2026-04-08 01:05:38.331835 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-08 01:05:38.331841 | orchestrator | Wednesday 08 April 2026 01:02:03 +0000 (0:00:01.269) 0:00:48.193 ******* 2026-04-08 01:05:38.331848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.331854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.331863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.331868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.331884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.331892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.331896 | orchestrator | 2026-04-08 01:05:38.331900 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-08 01:05:38.331904 | orchestrator | Wednesday 08 April 2026 01:02:07 +0000 (0:00:03.655) 0:00:51.848 ******* 2026-04-08 01:05:38.331908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.331912 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.331919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.331924 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.331928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.331943 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.331954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.331958 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.331963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.331971 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.331976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.331980 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.331984 | orchestrator | 2026-04-08 01:05:38.331988 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-08 01:05:38.331991 | orchestrator | Wednesday 08 April 2026 01:02:09 +0000 (0:00:02.224) 0:00:54.073 ******* 2026-04-08 01:05:38.331997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 
01:05:38.332001 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332016 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 
01:05:38.332024 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332032 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332045 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332056 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332060 | orchestrator | 2026-04-08 01:05:38.332065 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-08 01:05:38.332069 | orchestrator | Wednesday 08 April 2026 01:02:12 +0000 (0:00:02.555) 0:00:56.628 ******* 2026-04-08 01:05:38.332073 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332077 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332081 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332085 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332089 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332093 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332097 | orchestrator | 2026-04-08 01:05:38.332101 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-08 01:05:38.332109 | orchestrator | Wednesday 08 April 2026 01:02:15 +0000 (0:00:02.733) 0:00:59.362 ******* 2026-04-08 01:05:38.332113 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332116 | orchestrator | 2026-04-08 01:05:38.332120 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-08 01:05:38.332124 | orchestrator | Wednesday 08 April 2026 01:02:15 +0000 (0:00:00.230) 
0:00:59.592 ******* 2026-04-08 01:05:38.332128 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332132 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332136 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332140 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332144 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332148 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332151 | orchestrator | 2026-04-08 01:05:38.332156 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-08 01:05:38.332159 | orchestrator | Wednesday 08 April 2026 01:02:15 +0000 (0:00:00.485) 0:01:00.078 ******* 2026-04-08 01:05:38.332164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332168 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332179 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332189 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332200 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332208 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332216 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332220 | orchestrator | 2026-04-08 01:05:38.332223 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-08 01:05:38.332227 | orchestrator | Wednesday 08 April 2026 01:02:17 +0000 (0:00:02.141) 0:01:02.219 ******* 2026-04-08 01:05:38.332230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.332249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.332259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.332262 | orchestrator | 2026-04-08 01:05:38.332266 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-08 01:05:38.332271 | orchestrator | Wednesday 08 April 2026 01:02:21 +0000 (0:00:03.139) 0:01:05.359 
******* 2026-04-08 01:05:38.332274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.332296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.332299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.332303 | orchestrator | 2026-04-08 01:05:38.332306 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-08 01:05:38.332310 | orchestrator | Wednesday 08 April 2026 01:02:28 +0000 (0:00:07.042) 0:01:12.401 ******* 2026-04-08 01:05:38.332316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332320 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332329 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332336 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332344 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332351 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332360 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332364 | orchestrator | 2026-04-08 01:05:38.332367 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-08 01:05:38.332371 | orchestrator | Wednesday 08 April 2026 01:02:30 +0000 (0:00:02.470) 0:01:14.872 ******* 2026-04-08 01:05:38.332374 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:05:38.332377 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332383 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332386 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332389 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:05:38.332393 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:05:38.332396 | orchestrator | 2026-04-08 01:05:38.332399 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-08 01:05:38.332403 | orchestrator | Wednesday 08 April 2026 01:02:34 +0000 (0:00:03.667) 0:01:18.539 ******* 2026-04-08 
01:05:38.332406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332410 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332418 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332425 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.332445 | orchestrator | 2026-04-08 01:05:38.332448 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-08 01:05:38.332452 | orchestrator | Wednesday 08 April 2026 01:02:38 +0000 (0:00:03.797) 0:01:22.337 ******* 2026-04-08 01:05:38.332455 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332458 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332462 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332465 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332468 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332472 | 
orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332475 | orchestrator | 2026-04-08 01:05:38.332478 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-08 01:05:38.332484 | orchestrator | Wednesday 08 April 2026 01:02:41 +0000 (0:00:03.432) 0:01:25.769 ******* 2026-04-08 01:05:38.332488 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332491 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332494 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332498 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332501 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332504 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332508 | orchestrator | 2026-04-08 01:05:38.332511 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-08 01:05:38.332514 | orchestrator | Wednesday 08 April 2026 01:02:44 +0000 (0:00:02.840) 0:01:28.610 ******* 2026-04-08 01:05:38.332518 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332521 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332524 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332528 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332531 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332534 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332538 | orchestrator | 2026-04-08 01:05:38.332541 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-08 01:05:38.332544 | orchestrator | Wednesday 08 April 2026 01:02:46 +0000 (0:00:01.948) 0:01:30.559 ******* 2026-04-08 01:05:38.332548 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332551 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332554 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332560 | 
orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332563 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332566 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332570 | orchestrator | 2026-04-08 01:05:38.332573 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-08 01:05:38.332576 | orchestrator | Wednesday 08 April 2026 01:02:48 +0000 (0:00:02.305) 0:01:32.864 ******* 2026-04-08 01:05:38.332580 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332583 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332586 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332590 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332596 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332599 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332602 | orchestrator | 2026-04-08 01:05:38.332606 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-08 01:05:38.332611 | orchestrator | Wednesday 08 April 2026 01:02:50 +0000 (0:00:02.457) 0:01:35.321 ******* 2026-04-08 01:05:38.332617 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332621 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332624 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332629 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332634 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332640 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332645 | orchestrator | 2026-04-08 01:05:38.332650 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-08 01:05:38.332656 | orchestrator | Wednesday 08 April 2026 01:02:53 +0000 (0:00:03.008) 0:01:38.329 ******* 2026-04-08 01:05:38.332661 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-08 01:05:38.332666 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332671 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-08 01:05:38.332676 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332681 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-08 01:05:38.332686 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332692 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-08 01:05:38.332697 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332703 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-08 01:05:38.332708 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332713 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-08 01:05:38.332718 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332724 | orchestrator | 2026-04-08 01:05:38.332729 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-08 01:05:38.332735 | orchestrator | Wednesday 08 April 2026 01:02:57 +0000 (0:00:03.690) 0:01:42.020 ******* 2026-04-08 01:05:38.332741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332750 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332764 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332780 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332804 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332815 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332830 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332836 | orchestrator | 2026-04-08 01:05:38.332841 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-08 01:05:38.332849 | orchestrator | Wednesday 08 April 2026 01:03:01 +0000 (0:00:03.582) 0:01:45.604 ******* 2026-04-08 01:05:38.332855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332860 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332883 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.332894 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332909 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-04-08 01:05:38.332922 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.332933 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332938 | orchestrator | 2026-04-08 01:05:38.332944 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-08 01:05:38.332952 | orchestrator | Wednesday 08 April 2026 01:03:03 +0000 (0:00:02.292) 0:01:47.896 ******* 2026-04-08 01:05:38.332957 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.332963 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.332968 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.332974 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.332979 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.332984 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.332990 | orchestrator | 2026-04-08 01:05:38.332995 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-08 01:05:38.333000 | orchestrator | Wednesday 08 April 2026 01:03:06 +0000 (0:00:03.078) 0:01:50.975 ******* 2026-04-08 01:05:38.333006 | orchestrator | skipping: [testbed-node-2] 
2026-04-08 01:05:38.333011 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333016 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333021 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:05:38.333027 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:05:38.333032 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:05:38.333037 | orchestrator | 2026-04-08 01:05:38.333043 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-08 01:05:38.333048 | orchestrator | Wednesday 08 April 2026 01:03:11 +0000 (0:00:04.747) 0:01:55.722 ******* 2026-04-08 01:05:38.333054 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333059 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333064 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333069 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333075 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333084 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333089 | orchestrator | 2026-04-08 01:05:38.333094 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-08 01:05:38.333100 | orchestrator | Wednesday 08 April 2026 01:03:14 +0000 (0:00:03.197) 0:01:58.920 ******* 2026-04-08 01:05:38.333105 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333110 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333116 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333121 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333126 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333132 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333137 | orchestrator | 2026-04-08 01:05:38.333142 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-08 01:05:38.333148 | orchestrator | Wednesday 
08 April 2026 01:03:16 +0000 (0:00:02.374) 0:02:01.295 ******* 2026-04-08 01:05:38.333153 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333158 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333164 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333169 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333174 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333180 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333185 | orchestrator | 2026-04-08 01:05:38.333191 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-08 01:05:38.333196 | orchestrator | Wednesday 08 April 2026 01:03:19 +0000 (0:00:02.920) 0:02:04.215 ******* 2026-04-08 01:05:38.333201 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333207 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333212 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333217 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333223 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333228 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333233 | orchestrator | 2026-04-08 01:05:38.333239 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-08 01:05:38.333244 | orchestrator | Wednesday 08 April 2026 01:03:22 +0000 (0:00:02.156) 0:02:06.371 ******* 2026-04-08 01:05:38.333249 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333255 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333260 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333265 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333271 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333276 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333281 | orchestrator | 2026-04-08 01:05:38.333287 | orchestrator 
| TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-08 01:05:38.333294 | orchestrator | Wednesday 08 April 2026 01:03:24 +0000 (0:00:02.105) 0:02:08.477 ******* 2026-04-08 01:05:38.333300 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333305 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333311 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333316 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333321 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333327 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333332 | orchestrator | 2026-04-08 01:05:38.333338 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-08 01:05:38.333343 | orchestrator | Wednesday 08 April 2026 01:03:26 +0000 (0:00:01.922) 0:02:10.399 ******* 2026-04-08 01:05:38.333348 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333353 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333359 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333364 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333369 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333375 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333380 | orchestrator | 2026-04-08 01:05:38.333385 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-08 01:05:38.333396 | orchestrator | Wednesday 08 April 2026 01:03:28 +0000 (0:00:02.325) 0:02:12.725 ******* 2026-04-08 01:05:38.333401 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-08 01:05:38.333407 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333413 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-08 
01:05:38.333418 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333424 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-08 01:05:38.333429 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333434 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-08 01:05:38.333442 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333448 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-08 01:05:38.333453 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333459 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-08 01:05:38.333464 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333470 | orchestrator | 2026-04-08 01:05:38.333475 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-08 01:05:38.333480 | orchestrator | Wednesday 08 April 2026 01:03:31 +0000 (0:00:03.142) 0:02:15.867 ******* 2026-04-08 01:05:38.333486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.333491 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:38.333497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.333502 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:38.333511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-08 01:05:38.333520 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:38.333526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.333532 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:05:38.333540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.333546 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:05:38.333552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 01:05:38.333557 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:05:38.333563 | orchestrator | 2026-04-08 01:05:38.333568 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-08 01:05:38.333573 | orchestrator | Wednesday 08 April 2026 01:03:33 +0000 (0:00:02.132) 0:02:17.999 ******* 2026-04-08 01:05:38.333579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.333590 
| orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.333599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.333605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-08 01:05:38.333610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-08 01:05:38.333616 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-08 01:05:38.333628 | orchestrator |
2026-04-08 01:05:38.333634 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-08 01:05:38.333639 | orchestrator | Wednesday 08 April 2026 01:03:35 +0000 (0:00:02.069) 0:02:20.068 *******
2026-04-08 01:05:38.333645 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:38.333650 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:05:38.333659 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:05:38.333664 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:05:38.333670 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:05:38.333676 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:05:38.333681 | orchestrator |
2026-04-08 01:05:38.333687 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-08 01:05:38.333691 | orchestrator | Wednesday 08 April 2026 01:03:36 +0000 (0:00:00.581) 0:02:20.650 *******
2026-04-08 01:05:38.333694 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:38.333697 | orchestrator |
2026-04-08 01:05:38.333701 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-08 01:05:38.333704 | orchestrator | Wednesday 08 April 2026 01:03:38 +0000 (0:00:02.394) 0:02:23.044 *******
2026-04-08 01:05:38.333707 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:38.333710 | orchestrator |
2026-04-08 01:05:38.333714 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-08 01:05:38.333717 | orchestrator | Wednesday 08 April 2026 01:03:41 +0000 (0:00:02.492) 0:02:25.537 *******
2026-04-08 01:05:38.333720 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:05:38.333724 | orchestrator |
2026-04-08 01:05:38.333727 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-08 01:05:38.333730 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:39.177) 0:03:04.714 *******
2026-04-08 01:05:38.333734 | orchestrator |
2026-04-08 01:05:38.333737 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-08 01:05:38.333740 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.072) 0:03:04.786 *******
2026-04-08 01:05:38.333743 | orchestrator |
2026-04-08 01:05:38.333747 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-08 01:05:38.333750 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.075) 0:03:04.862 *******
2026-04-08 01:05:38.333753 | orchestrator |
2026-04-08 01:05:38.333757 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-08 01:05:38.333765 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.097) 0:03:04.960 *******
2026-04-08 01:05:38.333770 | orchestrator |
2026-04-08 01:05:38.333775 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-08 01:05:38.333841 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.075) 0:03:05.036 *******
2026-04-08 01:05:38.333850 | orchestrator |
2026-04-08 01:05:38.333854 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-08 01:05:38.333857 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.066) 0:03:05.102 *******
2026-04-08 01:05:38.333861 | orchestrator |
2026-04-08 01:05:38.333864 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-08 01:05:38.333867 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.076) 0:03:05.179 *******
2026-04-08 01:05:38.333871 | orchestrator
| changed: [testbed-node-0]
2026-04-08 01:05:38.333874 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:05:38.333877 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:05:38.333881 | orchestrator |
2026-04-08 01:05:38.333884 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-08 01:05:38.333887 | orchestrator | Wednesday 08 April 2026 01:04:44 +0000 (0:00:23.862) 0:03:29.041 *******
2026-04-08 01:05:38.333891 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:05:38.333894 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:05:38.333901 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:05:38.333904 | orchestrator |
2026-04-08 01:05:38.333908 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:05:38.333911 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 01:05:38.333915 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-08 01:05:38.333919 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-08 01:05:38.333922 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 01:05:38.333926 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 01:05:38.333929 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 01:05:38.333932 | orchestrator |
2026-04-08 01:05:38.333936 | orchestrator |
2026-04-08 01:05:38.333939 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:05:38.333942 | orchestrator | Wednesday 08 April 2026 01:05:37 +0000 (0:00:52.617) 0:04:21.658 *******
2026-04-08 01:05:38.333946 | orchestrator | ===============================================================================
2026-04-08 01:05:38.333949 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 52.62s
2026-04-08 01:05:38.333952 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.18s
2026-04-08 01:05:38.333956 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.86s
2026-04-08 01:05:38.333959 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.22s
2026-04-08 01:05:38.333962 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.34s
2026-04-08 01:05:38.333965 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.04s
2026-04-08 01:05:38.333972 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.75s
2026-04-08 01:05:38.333975 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.42s
2026-04-08 01:05:38.333978 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.80s
2026-04-08 01:05:38.333982 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.72s
2026-04-08 01:05:38.333985 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.69s
2026-04-08 01:05:38.333988 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.67s
2026-04-08 01:05:38.333992 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.66s
2026-04-08 01:05:38.333995 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.58s
2026-04-08 01:05:38.333998 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.58s
2026-04-08 01:05:38.334001 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.43s
2026-04-08 01:05:38.334005 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.29s
2026-04-08 01:05:38.334008 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.20s
2026-04-08 01:05:38.334033 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.14s
2026-04-08 01:05:38.334038 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.14s
2026-04-08 01:05:41.376951 | orchestrator | 2026-04-08 01:05:41 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:41.379559 | orchestrator | 2026-04-08 01:05:41 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:41.380017 | orchestrator | 2026-04-08 01:05:41 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED
2026-04-08 01:05:41.382303 | orchestrator | 2026-04-08 01:05:41 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:05:41.382352 | orchestrator | 2026-04-08 01:05:41 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:05:44.426740 | orchestrator | 2026-04-08 01:05:44 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:44.427346 | orchestrator | 2026-04-08 01:05:44 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:44.428319 | orchestrator | 2026-04-08 01:05:44 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state STARTED
2026-04-08 01:05:44.430067 | orchestrator | 2026-04-08 01:05:44 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:05:44.430099 | orchestrator | 2026-04-08 01:05:44 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:05:47.475691 | orchestrator | 2026-04-08 01:05:47 | INFO  | Task
6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:47.477394 | orchestrator | 2026-04-08 01:05:47 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:47.480523 | orchestrator | 2026-04-08 01:05:47 | INFO  | Task 40544abb-2ca5-48ec-ae9a-59d070a15025 is in state STARTED
2026-04-08 01:05:47.481608 | orchestrator | 2026-04-08 01:05:47 | INFO  | Task 35dd5173-9c8e-4e9e-a57c-a5e41852c1e6 is in state SUCCESS
2026-04-08 01:05:47.484707 | orchestrator |
2026-04-08 01:05:47.484766 | orchestrator |
2026-04-08 01:05:47.484774 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:05:47.484831 | orchestrator |
2026-04-08 01:05:47.484837 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:05:47.484849 | orchestrator | Wednesday 08 April 2026 01:04:31 +0000 (0:00:00.298) 0:00:00.298 *******
2026-04-08 01:05:47.484857 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:05:47.484863 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:05:47.484868 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:05:47.484874 | orchestrator |
2026-04-08 01:05:47.484879 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:05:47.484884 | orchestrator | Wednesday 08 April 2026 01:04:32 +0000 (0:00:00.335) 0:00:00.633 *******
2026-04-08 01:05:47.484890 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-08 01:05:47.484896 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-08 01:05:47.484902 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-08 01:05:47.484907 | orchestrator |
2026-04-08 01:05:47.484910 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-08 01:05:47.484914 | orchestrator |
2026-04-08 01:05:47.484919 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-08 01:05:47.484925 | orchestrator | Wednesday 08 April 2026 01:04:32 +0000 (0:00:00.344) 0:00:00.978 *******
2026-04-08 01:05:47.484931 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:05:47.484936 | orchestrator |
2026-04-08 01:05:47.484942 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-04-08 01:05:47.484949 | orchestrator | Wednesday 08 April 2026 01:04:33 +0000 (0:00:00.615) 0:00:01.594 *******
2026-04-08 01:05:47.484957 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-04-08 01:05:47.484962 | orchestrator |
2026-04-08 01:05:47.484968 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-04-08 01:05:47.484985 | orchestrator | Wednesday 08 April 2026 01:04:36 +0000 (0:00:03.826) 0:00:05.420 *******
2026-04-08 01:05:47.485004 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-08 01:05:47.485012 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-08 01:05:47.485017 | orchestrator |
2026-04-08 01:05:47.485022 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-08 01:05:47.485027 | orchestrator | Wednesday 08 April 2026 01:04:43 +0000 (0:00:06.487) 0:00:11.908 *******
2026-04-08 01:05:47.485032 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-08 01:05:47.485037 | orchestrator |
2026-04-08 01:05:47.485042 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-08 01:05:47.485047 | orchestrator | Wednesday 08 April 2026 01:04:47 +0000 (0:00:03.901) 0:00:15.809 *******
2026-04-08 01:05:47.485053 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-04-08 01:05:47.485059 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-08 01:05:47.485065 | orchestrator |
2026-04-08 01:05:47.485068 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-08 01:05:47.485072 | orchestrator | Wednesday 08 April 2026 01:04:52 +0000 (0:00:04.869) 0:00:20.679 *******
2026-04-08 01:05:47.485075 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-08 01:05:47.485078 | orchestrator |
2026-04-08 01:05:47.485081 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-04-08 01:05:47.485085 | orchestrator | Wednesday 08 April 2026 01:04:55 +0000 (0:00:03.825) 0:00:23.923 *******
2026-04-08 01:05:47.485088 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-04-08 01:05:47.485091 | orchestrator |
2026-04-08 01:05:47.485094 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-08 01:05:47.485098 | orchestrator | Wednesday 08 April 2026 01:04:59 +0000 (0:00:03.825) 0:00:27.749 *******
2026-04-08 01:05:47.485104 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:05:47.485109 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:05:47.485114 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:05:47.485120 | orchestrator |
2026-04-08 01:05:47.485125 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-04-08 01:05:47.485131 | orchestrator | Wednesday 08 April 2026 01:04:59 +0000 (0:00:00.340) 0:00:28.090 *******
2026-04-08 01:05:47.485137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes':
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485182 | orchestrator | 2026-04-08 01:05:47.485188 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-08 01:05:47.485193 | orchestrator | Wednesday 08 April 2026 01:05:01 +0000 (0:00:02.025) 0:00:30.116 ******* 2026-04-08 01:05:47.485196 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:47.485199 | orchestrator | 2026-04-08 01:05:47.485202 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-08 01:05:47.485206 | orchestrator | Wednesday 08 April 2026 01:05:01 +0000 (0:00:00.116) 0:00:30.232 ******* 2026-04-08 01:05:47.485209 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:47.485212 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:47.485215 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:47.485219 | orchestrator | 2026-04-08 01:05:47.485222 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-08 01:05:47.485225 | orchestrator | Wednesday 08 April 2026 01:05:02 +0000 (0:00:00.346) 0:00:30.579 ******* 2026-04-08 01:05:47.485228 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:05:47.485232 | orchestrator | 2026-04-08 01:05:47.485235 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 
2026-04-08 01:05:47.485238 | orchestrator | Wednesday 08 April 2026 01:05:03 +0000 (0:00:01.118) 0:00:31.698 ******* 2026-04-08 01:05:47.485241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 
01:05:47.485255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485259 | orchestrator | 2026-04-08 01:05:47.485263 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-08 01:05:47.485267 | orchestrator | Wednesday 08 April 2026 01:05:04 +0000 (0:00:01.513) 0:00:33.212 ******* 2026-04-08 01:05:47.485273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485279 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:47.485286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485291 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:47.485298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485305 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:47.485309 | orchestrator | 2026-04-08 01:05:47.485313 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-08 01:05:47.485317 | orchestrator | Wednesday 08 April 2026 01:05:05 +0000 (0:00:00.918) 0:00:34.131 ******* 2026-04-08 01:05:47.485321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485325 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:47.485332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485340 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:47.485344 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:47.485348 | orchestrator | 2026-04-08 01:05:47.485351 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-08 01:05:47.485355 | orchestrator | Wednesday 08 April 2026 01:05:06 +0000 (0:00:00.747) 0:00:34.879 ******* 2026-04-08 01:05:47.485362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485379 | orchestrator | 2026-04-08 01:05:47.485383 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-08 01:05:47.485387 | orchestrator | Wednesday 08 April 2026 01:05:07 +0000 (0:00:01.485) 0:00:36.364 ******* 2026-04-08 01:05:47.485391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485409 | orchestrator | 2026-04-08 01:05:47.485413 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-08 01:05:47.485417 | orchestrator | Wednesday 08 April 2026 
01:05:10 +0000 (0:00:02.646) 0:00:39.011 ******* 2026-04-08 01:05:47.485422 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-08 01:05:47.485428 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-08 01:05:47.485433 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-08 01:05:47.485438 | orchestrator | 2026-04-08 01:05:47.485444 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-08 01:05:47.485447 | orchestrator | Wednesday 08 April 2026 01:05:12 +0000 (0:00:01.763) 0:00:40.774 ******* 2026-04-08 01:05:47.485451 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:05:47.485456 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:05:47.485461 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:05:47.485466 | orchestrator | 2026-04-08 01:05:47.485474 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-08 01:05:47.485480 | orchestrator | Wednesday 08 April 2026 01:05:13 +0000 (0:00:01.356) 0:00:42.131 ******* 2026-04-08 01:05:47.485486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485492 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:05:47.485498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485506 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:05:47.485514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-08 01:05:47.485518 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:05:47.485521 | orchestrator | 2026-04-08 01:05:47.485525 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-08 01:05:47.485529 | orchestrator | Wednesday 08 April 2026 01:05:14 +0000 (0:00:00.812) 0:00:42.943 ******* 2026-04-08 01:05:47.485533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-08 01:05:47.485558 | orchestrator | 2026-04-08 01:05:47.485563 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-08 01:05:47.485569 | orchestrator | Wednesday 08 April 2026 01:05:15 +0000 (0:00:01.127) 0:00:44.070 ******* 2026-04-08 01:05:47.485574 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:05:47.485579 | orchestrator | 2026-04-08 01:05:47.485585 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-08 01:05:47.485592 | orchestrator | Wednesday 08 April 2026 01:05:17 +0000 (0:00:02.245) 
0:00:46.316 ******* 2026-04-08 01:05:47.485597 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:05:47.485601 | orchestrator | 2026-04-08 01:05:47.485605 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-08 01:05:47.485609 | orchestrator | Wednesday 08 April 2026 01:05:19 +0000 (0:00:02.088) 0:00:48.405 ******* 2026-04-08 01:05:47.485613 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:05:47.485618 | orchestrator | 2026-04-08 01:05:47.485623 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-08 01:05:47.485628 | orchestrator | Wednesday 08 April 2026 01:05:33 +0000 (0:00:13.980) 0:01:02.386 ******* 2026-04-08 01:05:47.485633 | orchestrator | 2026-04-08 01:05:47.485638 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-08 01:05:47.485643 | orchestrator | Wednesday 08 April 2026 01:05:34 +0000 (0:00:00.064) 0:01:02.450 ******* 2026-04-08 01:05:47.485648 | orchestrator | 2026-04-08 01:05:47.485657 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-08 01:05:47.485663 | orchestrator | Wednesday 08 April 2026 01:05:34 +0000 (0:00:00.068) 0:01:02.519 ******* 2026-04-08 01:05:47.485667 | orchestrator | 2026-04-08 01:05:47.485673 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-08 01:05:47.485678 | orchestrator | Wednesday 08 April 2026 01:05:34 +0000 (0:00:00.068) 0:01:02.587 ******* 2026-04-08 01:05:47.485684 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:05:47.485689 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:05:47.485694 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:05:47.485700 | orchestrator | 2026-04-08 01:05:47.485705 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:05:47.485710 | 
orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-08 01:05:47.485717 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-08 01:05:47.485723 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-08 01:05:47.485727 | orchestrator |
2026-04-08 01:05:47.485730 | orchestrator |
2026-04-08 01:05:47.485733 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:05:47.485738 | orchestrator | Wednesday 08 April 2026 01:05:44 +0000 (0:00:10.219) 0:01:12.806 *******
2026-04-08 01:05:47.485743 | orchestrator | ===============================================================================
2026-04-08 01:05:47.485747 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.98s
2026-04-08 01:05:47.485757 | orchestrator | placement : Restart placement-api container ---------------------------- 10.22s
2026-04-08 01:05:47.485766 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.49s
2026-04-08 01:05:47.485770 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.87s
2026-04-08 01:05:47.485774 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.90s
2026-04-08 01:05:47.485792 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.83s
2026-04-08 01:05:47.485796 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.83s
2026-04-08 01:05:47.485799 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.24s
2026-04-08 01:05:47.485802 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.65s
2026-04-08 01:05:47.485806 | orchestrator | placement : Creating placement databases -------------------------------- 2.25s
2026-04-08 01:05:47.485809 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.09s
2026-04-08 01:05:47.485812 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.03s
2026-04-08 01:05:47.485815 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.76s
2026-04-08 01:05:47.485819 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.51s
2026-04-08 01:05:47.485822 | orchestrator | placement : Copying over config.json files for services ----------------- 1.49s
2026-04-08 01:05:47.485825 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.36s
2026-04-08 01:05:47.485828 | orchestrator | placement : Check placement containers ---------------------------------- 1.13s
2026-04-08 01:05:47.485831 | orchestrator | placement : include_tasks ----------------------------------------------- 1.12s
2026-04-08 01:05:47.485835 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.92s
2026-04-08 01:05:47.485838 | orchestrator | placement : Copying over existing policy file --------------------------- 0.81s
2026-04-08 01:05:47.485842 | orchestrator | 2026-04-08 01:05:47 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:05:47.485845 | orchestrator | 2026-04-08 01:05:47 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:05:50.531669 | orchestrator | 2026-04-08 01:05:50 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:50.534754 | orchestrator | 2026-04-08 01:05:50 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:50.534858 | orchestrator | 2026-04-08 01:05:50 | INFO  | Task 40544abb-2ca5-48ec-ae9a-59d070a15025 is in state SUCCESS
2026-04-08 01:05:50.534867
| orchestrator | 2026-04-08 01:05:50 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:05:50.534874 | orchestrator | 2026-04-08 01:05:50 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:05:53.575330 | orchestrator | 2026-04-08 01:05:53 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:53.575845 | orchestrator | 2026-04-08 01:05:53 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:53.576648 | orchestrator | 2026-04-08 01:05:53 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:05:53.577608 | orchestrator | 2026-04-08 01:05:53 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:05:53.577630 | orchestrator | 2026-04-08 01:05:53 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:05:56.608854 | orchestrator | 2026-04-08 01:05:56 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:56.609091 | orchestrator | 2026-04-08 01:05:56 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:56.610167 | orchestrator | 2026-04-08 01:05:56 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:05:56.610627 | orchestrator | 2026-04-08 01:05:56 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:05:56.610668 | orchestrator | 2026-04-08 01:05:56 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:05:59.639943 | orchestrator | 2026-04-08 01:05:59 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:05:59.643520 | orchestrator | 2026-04-08 01:05:59 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:05:59.643902 | orchestrator | 2026-04-08 01:05:59 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:05:59.644659 | orchestrator | 2026-04-08 01:05:59 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:05:59.644698 | orchestrator | 2026-04-08 01:05:59 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:02.672915 | orchestrator | 2026-04-08 01:06:02 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:02.673709 | orchestrator | 2026-04-08 01:06:02 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:02.677536 | orchestrator | 2026-04-08 01:06:02 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:02.677587 | orchestrator | 2026-04-08 01:06:02 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:02.677593 | orchestrator | 2026-04-08 01:06:02 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:05.725463 | orchestrator | 2026-04-08 01:06:05 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:05.726003 | orchestrator | 2026-04-08 01:06:05 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:05.726224 | orchestrator | 2026-04-08 01:06:05 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:05.727192 | orchestrator | 2026-04-08 01:06:05 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:05.728100 | orchestrator | 2026-04-08 01:06:05 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:08.776920 | orchestrator | 2026-04-08 01:06:08 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:08.779421 | orchestrator | 2026-04-08 01:06:08 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:08.782339 | orchestrator | 2026-04-08 01:06:08 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:08.784038 | orchestrator | 2026-04-08 01:06:08 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:08.784082 | orchestrator | 2026-04-08 01:06:08 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:11.826101 | orchestrator | 2026-04-08 01:06:11 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:11.826156 | orchestrator | 2026-04-08 01:06:11 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:11.826786 | orchestrator | 2026-04-08 01:06:11 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:11.827642 | orchestrator | 2026-04-08 01:06:11 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:11.827698 | orchestrator | 2026-04-08 01:06:11 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:14.871844 | orchestrator | 2026-04-08 01:06:14 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:14.873293 | orchestrator | 2026-04-08 01:06:14 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:14.875201 | orchestrator | 2026-04-08 01:06:14 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:14.876589 | orchestrator | 2026-04-08 01:06:14 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:14.876633 | orchestrator | 2026-04-08 01:06:14 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:17.920190 | orchestrator | 2026-04-08 01:06:17 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:17.920956 | orchestrator | 2026-04-08 01:06:17 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:17.923675 | orchestrator | 2026-04-08 01:06:17 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:17.923714 | orchestrator | 2026-04-08 01:06:17 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:17.923721 | orchestrator | 2026-04-08 01:06:17 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:20.959420 | orchestrator | 2026-04-08 01:06:20 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:20.960948 | orchestrator | 2026-04-08 01:06:20 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:20.962428 | orchestrator | 2026-04-08 01:06:20 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:20.962462 | orchestrator | 2026-04-08 01:06:20 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:20.965809 | orchestrator | 2026-04-08 01:06:20 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:24.002574 | orchestrator | 2026-04-08 01:06:24 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:24.003830 | orchestrator | 2026-04-08 01:06:24 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:24.005011 | orchestrator | 2026-04-08 01:06:24 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:24.006147 | orchestrator | 2026-04-08 01:06:24 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:24.006183 | orchestrator | 2026-04-08 01:06:24 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:27.060105 | orchestrator | 2026-04-08 01:06:27 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:27.060228 | orchestrator | 2026-04-08 01:06:27 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:27.060242 | orchestrator | 2026-04-08 01:06:27 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:27.060255 | orchestrator | 2026-04-08 01:06:27 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:27.060261 | orchestrator | 2026-04-08 01:06:27 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:30.094417 | orchestrator | 2026-04-08 01:06:30 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:30.095408 | orchestrator | 2026-04-08 01:06:30 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:30.097321 | orchestrator | 2026-04-08 01:06:30 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:30.098204 | orchestrator | 2026-04-08 01:06:30 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:30.098285 | orchestrator | 2026-04-08 01:06:30 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:33.132099 | orchestrator | 2026-04-08 01:06:33 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:33.134627 | orchestrator | 2026-04-08 01:06:33 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:33.136807 | orchestrator | 2026-04-08 01:06:33 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:33.138820 | orchestrator | 2026-04-08 01:06:33 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:33.139152 | orchestrator | 2026-04-08 01:06:33 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:36.181541 | orchestrator | 2026-04-08 01:06:36 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:36.183093 | orchestrator | 2026-04-08 01:06:36 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:36.185323 | orchestrator | 2026-04-08 01:06:36 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:36.187257 | orchestrator | 2026-04-08 01:06:36 | INFO  | Task
01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:06:36.187298 | orchestrator | 2026-04-08 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:06:39.250228 | orchestrator | 2026-04-08 01:06:39 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:06:39.252736 | orchestrator | 2026-04-08 01:06:39 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:06:39.256191 | orchestrator | 2026-04-08 01:06:39 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:06:39.258856 | orchestrator | 2026-04-08 01:06:39 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:06:39.258905 | orchestrator | 2026-04-08 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:06:42.304625 | orchestrator | 2026-04-08 01:06:42 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:06:42.306845 | orchestrator | 2026-04-08 01:06:42 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:06:42.308939 | orchestrator | 2026-04-08 01:06:42 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:06:42.312846 | orchestrator | 2026-04-08 01:06:42 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:06:42.312903 | orchestrator | 2026-04-08 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:06:45.373968 | orchestrator | 2026-04-08 01:06:45 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:06:45.377989 | orchestrator | 2026-04-08 01:06:45 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:06:45.379711 | orchestrator | 2026-04-08 01:06:45 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:06:45.380247 | orchestrator | 2026-04-08 01:06:45 | INFO  | Task 
01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:06:45.380262 | orchestrator | 2026-04-08 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:06:48.418999 | orchestrator | 2026-04-08 01:06:48 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:06:48.420055 | orchestrator | 2026-04-08 01:06:48 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:06:48.421101 | orchestrator | 2026-04-08 01:06:48 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:06:48.422113 | orchestrator | 2026-04-08 01:06:48 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:06:48.422138 | orchestrator | 2026-04-08 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:06:51.470193 | orchestrator | 2026-04-08 01:06:51 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:06:51.474169 | orchestrator | 2026-04-08 01:06:51 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:06:51.475380 | orchestrator | 2026-04-08 01:06:51 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:06:51.478322 | orchestrator | 2026-04-08 01:06:51 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:06:51.478378 | orchestrator | 2026-04-08 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:06:54.520313 | orchestrator | 2026-04-08 01:06:54 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:06:54.520367 | orchestrator | 2026-04-08 01:06:54 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED 2026-04-08 01:06:54.521263 | orchestrator | 2026-04-08 01:06:54 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:06:54.522052 | orchestrator | 2026-04-08 01:06:54 | INFO  | Task 
01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:54.522095 | orchestrator | 2026-04-08 01:06:54 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:06:57.573196 | orchestrator | 2026-04-08 01:06:57 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:06:57.574511 | orchestrator | 2026-04-08 01:06:57 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state STARTED
2026-04-08 01:06:57.577529 | orchestrator | 2026-04-08 01:06:57 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:06:57.579523 | orchestrator | 2026-04-08 01:06:57 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED
2026-04-08 01:06:57.579571 | orchestrator | 2026-04-08 01:06:57 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:07:00.634174 | orchestrator | 2026-04-08 01:07:00 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:07:00.636319 | orchestrator | 2026-04-08 01:07:00 | INFO  | Task 4d648243-fbcd-4ed3-b8d1-dee94492a7ec is in state SUCCESS
2026-04-08 01:07:00.636577 | orchestrator |
2026-04-08 01:07:00.636597 | orchestrator |
2026-04-08 01:07:00.636602 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:07:00.636607 | orchestrator |
2026-04-08 01:07:00.636611 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:07:00.636616 | orchestrator | Wednesday 08 April 2026 01:05:47 +0000 (0:00:00.209) 0:00:00.209 *******
2026-04-08 01:07:00.636621 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:07:00.636625 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:07:00.636630 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:07:00.636634 | orchestrator |
2026-04-08 01:07:00.636638 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:07:00.636641 | orchestrator | Wednesday 08 April 2026 01:05:48 +0000 (0:00:00.361) 0:00:00.571 *******
2026-04-08 01:07:00.636646 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-04-08 01:07:00.636650 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-04-08 01:07:00.636654 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-04-08 01:07:00.636658 | orchestrator |
2026-04-08 01:07:00.636662 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-04-08 01:07:00.636686 | orchestrator |
2026-04-08 01:07:00.636690 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-04-08 01:07:00.636694 | orchestrator | Wednesday 08 April 2026 01:05:48 +0000 (0:00:00.478) 0:00:01.049 *******
2026-04-08 01:07:00.636698 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:07:00.636702 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:07:00.636750 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:07:00.636757 | orchestrator |
2026-04-08 01:07:00.636764 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:07:00.636771 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:07:00.636792 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:07:00.636799 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:07:00.636805 | orchestrator |
2026-04-08 01:07:00.636809 | orchestrator |
2026-04-08 01:07:00.636813 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:07:00.636816 | orchestrator | Wednesday 08 April 2026 01:05:49 +0000 (0:00:00.895) 0:00:01.945 *******
2026-04-08 01:07:00.636820 | orchestrator | ===============================================================================
2026-04-08 01:07:00.636824 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.90s
2026-04-08 01:07:00.636828 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2026-04-08 01:07:00.636832 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-04-08 01:07:00.636836 | orchestrator |
2026-04-08 01:07:00.638111 | orchestrator |
2026-04-08 01:07:00.638168 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:07:00.638180 | orchestrator |
2026-04-08 01:07:00.638186 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:07:00.638193 | orchestrator | Wednesday 08 April 2026 01:05:09 +0000 (0:00:00.380) 0:00:00.380 *******
2026-04-08 01:07:00.638200 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:07:00.638207 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:07:00.638213 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:07:00.638219 | orchestrator |
2026-04-08 01:07:00.638226 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:07:00.638232 | orchestrator | Wednesday 08 April 2026 01:05:09 +0000 (0:00:00.324) 0:00:00.705 *******
2026-04-08 01:07:00.638239 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-08 01:07:00.638246 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-08 01:07:00.638252 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-08 01:07:00.638258 | orchestrator |
2026-04-08 01:07:00.638265 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-08 01:07:00.638272 | orchestrator |
2026-04-08 01:07:00.638279 | orchestrator | TASK [magnum : 
include_tasks] **************************************************
2026-04-08 01:07:00.638286 | orchestrator | Wednesday 08 April 2026 01:05:10 +0000 (0:00:00.377) 0:00:01.082 *******
2026-04-08 01:07:00.638292 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:07:00.638300 | orchestrator |
2026-04-08 01:07:00.638305 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-04-08 01:07:00.638312 | orchestrator | Wednesday 08 April 2026 01:05:11 +0000 (0:00:01.273) 0:00:02.356 *******
2026-04-08 01:07:00.638319 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-04-08 01:07:00.638325 | orchestrator |
2026-04-08 01:07:00.638331 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-04-08 01:07:00.638337 | orchestrator | Wednesday 08 April 2026 01:05:15 +0000 (0:00:03.531) 0:00:05.887 *******
2026-04-08 01:07:00.638344 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-04-08 01:07:00.638377 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-04-08 01:07:00.638384 | orchestrator |
2026-04-08 01:07:00.638392 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-04-08 01:07:00.638398 | orchestrator | Wednesday 08 April 2026 01:05:21 +0000 (0:00:06.420) 0:00:12.308 *******
2026-04-08 01:07:00.638404 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-08 01:07:00.638414 | orchestrator |
2026-04-08 01:07:00.638422 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-04-08 01:07:00.638427 | orchestrator | Wednesday 08 April 2026 01:05:24 +0000 (0:00:02.977) 0:00:15.285 *******
2026-04-08 01:07:00.638433 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-04-08 01:07:00.638440 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-08 01:07:00.638447 | orchestrator |
2026-04-08 01:07:00.638453 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-04-08 01:07:00.638459 | orchestrator | Wednesday 08 April 2026 01:05:28 +0000 (0:00:04.066) 0:00:19.352 *******
2026-04-08 01:07:00.638466 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-08 01:07:00.638471 | orchestrator |
2026-04-08 01:07:00.638474 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-04-08 01:07:00.638479 | orchestrator | Wednesday 08 April 2026 01:05:32 +0000 (0:00:03.871) 0:00:23.224 *******
2026-04-08 01:07:00.638484 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-04-08 01:07:00.638490 | orchestrator |
2026-04-08 01:07:00.638496 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-04-08 01:07:00.638502 | orchestrator | Wednesday 08 April 2026 01:05:35 +0000 (0:00:03.054) 0:00:26.710 *******
2026-04-08 01:07:00.638508 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:07:00.638516 | orchestrator |
2026-04-08 01:07:00.638526 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-04-08 01:07:00.638532 | orchestrator | Wednesday 08 April 2026 01:05:38 +0000 (0:00:03.688) 0:00:29.765 *******
2026-04-08 01:07:00.638691 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:07:00.638702 | orchestrator |
2026-04-08 01:07:00.638782 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-04-08 01:07:00.638791 | orchestrator | Wednesday 08 April 2026 01:05:42 +0000 (0:00:03.563) 0:00:33.454 *******
2026-04-08 01:07:00.638798 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:07:00.638803 | orchestrator |
2026-04-08 01:07:00.638807 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-04-08 01:07:00.638823 | orchestrator | Wednesday 08 April 2026 01:05:46 +0000 (0:00:03.563) 0:00:37.017 *******
2026-04-08 01:07:00.638841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.638849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.638863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.638867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.638875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.638884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.638888 | orchestrator |
2026-04-08 01:07:00.638892 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-08 01:07:00.638896 | orchestrator | Wednesday 08 April 2026 01:05:47 +0000 (0:00:01.761) 0:00:38.778 *******
2026-04-08 01:07:00.638904 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:07:00.638907 | orchestrator |
2026-04-08 01:07:00.638912 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-08 01:07:00.638918 | orchestrator | Wednesday 08 April 2026 01:05:48 +0000 (0:00:00.133) 0:00:38.912 *******
2026-04-08 01:07:00.638924 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:07:00.638929 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:07:00.638935 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:07:00.638940 | orchestrator |
2026-04-08 01:07:00.638946 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-08 01:07:00.638952 | orchestrator | Wednesday 08 April 2026 01:05:48 +0000 (0:00:00.285) 0:00:39.197 *******
2026-04-08 01:07:00.638958 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-08 01:07:00.638964 | orchestrator |
2026-04-08 01:07:00.638970 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-08 01:07:00.638977 | orchestrator | Wednesday 08 April 2026 01:05:49 +0000 (0:00:00.800) 0:00:39.998 *******
2026-04-08 01:07:00.638985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.638996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.639007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.639023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.639035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.639042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.639048 | orchestrator |
2026-04-08 01:07:00.639055 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-08 01:07:00.639061 | orchestrator | Wednesday 08 April 2026 01:05:51 +0000 (0:00:02.135) 0:00:42.134 *******
2026-04-08 01:07:00.639067 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:07:00.639074 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:07:00.639081 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:07:00.639087 | orchestrator |
2026-04-08 01:07:00.639094 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-08 01:07:00.639101 | orchestrator | Wednesday 08 April 2026 01:05:51 +0000 (0:00:00.505) 0:00:42.640 *******
2026-04-08 01:07:00.639106 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:07:00.639110 | orchestrator |
2026-04-08 01:07:00.639114 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-08 01:07:00.639118 | orchestrator | Wednesday 08 April 2026 01:05:52 +0000 (0:00:00.516) 0:00:43.156 *******
2026-04-08 01:07:00.639122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.639137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.639144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.639152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.639162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.639169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:07:00.639181 | orchestrator |
2026-04-08 01:07:00.639187 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-04-08 01:07:00.639203 | orchestrator | Wednesday 08 April 2026 01:05:54 +0000 (0:00:02.223) 0:00:45.379 *******
2026-04-08 01:07:00.639216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-08 01:07:00.639222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:07:00.639228 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:07:00.639234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:07:00.639246 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:00.639255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:07:00.639278 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 01:07:00.639283 | orchestrator | 2026-04-08 01:07:00.639289 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-08 01:07:00.639294 | orchestrator | Wednesday 08 April 2026 01:05:55 +0000 (0:00:01.252) 0:00:46.632 ******* 2026-04-08 01:07:00.639300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2026-04-08 01:07:00.639313 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:07:00.639320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:07:00.639345 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:07:00.639358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:07:00.639372 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:00.639379 | orchestrator | 2026-04-08 01:07:00.639385 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-08 01:07:00.639390 | orchestrator | Wednesday 08 April 2026 01:05:57 +0000 (0:00:01.162) 0:00:47.795 ******* 2026-04-08 01:07:00.639397 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639457 | orchestrator | 2026-04-08 01:07:00.639463 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-08 01:07:00.639470 | orchestrator | Wednesday 08 April 2026 01:05:59 +0000 (0:00:02.571) 0:00:50.366 ******* 2026-04-08 01:07:00.639477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639533 | orchestrator | 2026-04-08 01:07:00.639541 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-08 01:07:00.639552 | orchestrator | Wednesday 08 April 2026 01:06:05 +0000 (0:00:06.240) 0:00:56.607 ******* 2026-04-08 01:07:00.639563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:07:00.639578 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:07:00.639585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:07:00.639768 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:00.639796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-08 01:07:00.639812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 
01:07:00.639819 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:07:00.639826 | orchestrator | 2026-04-08 01:07:00.639832 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-08 01:07:00.639839 | orchestrator | Wednesday 08 April 2026 01:06:06 +0000 (0:00:00.619) 0:00:57.226 ******* 2026-04-08 01:07:00.639846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-08 01:07:00.639879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639891 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:07:00.639904 | orchestrator | 2026-04-08 01:07:00.639910 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-08 01:07:00.639927 | orchestrator | Wednesday 08 April 2026 01:06:08 +0000 (0:00:01.725) 0:00:58.952 ******* 2026-04-08 01:07:00.639935 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:07:00.639943 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:07:00.639951 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:00.639957 | orchestrator | 
2026-04-08 01:07:00.639965 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-08 01:07:00.639972 | orchestrator | Wednesday 08 April 2026 01:06:08 +0000 (0:00:00.319) 0:00:59.271 ******* 2026-04-08 01:07:00.639979 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:00.639985 | orchestrator | 2026-04-08 01:07:00.639991 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-08 01:07:00.639998 | orchestrator | Wednesday 08 April 2026 01:06:10 +0000 (0:00:02.045) 0:01:01.318 ******* 2026-04-08 01:07:00.640004 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:00.640010 | orchestrator | 2026-04-08 01:07:00.640016 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-08 01:07:00.640023 | orchestrator | Wednesday 08 April 2026 01:06:12 +0000 (0:00:02.116) 0:01:03.434 ******* 2026-04-08 01:07:00.640029 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:00.640034 | orchestrator | 2026-04-08 01:07:00.640040 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-08 01:07:00.640047 | orchestrator | Wednesday 08 April 2026 01:06:26 +0000 (0:00:13.959) 0:01:17.393 ******* 2026-04-08 01:07:00.640054 | orchestrator | 2026-04-08 01:07:00.640060 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-08 01:07:00.640066 | orchestrator | Wednesday 08 April 2026 01:06:26 +0000 (0:00:00.239) 0:01:17.633 ******* 2026-04-08 01:07:00.640072 | orchestrator | 2026-04-08 01:07:00.640078 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-08 01:07:00.640084 | orchestrator | Wednesday 08 April 2026 01:06:26 +0000 (0:00:00.062) 0:01:17.695 ******* 2026-04-08 01:07:00.640090 | orchestrator | 2026-04-08 01:07:00.640095 | orchestrator | RUNNING HANDLER [magnum : 
Restart magnum-api container] ************************ 2026-04-08 01:07:00.640100 | orchestrator | Wednesday 08 April 2026 01:06:26 +0000 (0:00:00.065) 0:01:17.760 ******* 2026-04-08 01:07:00.640103 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:00.640108 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:07:00.640112 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:07:00.640116 | orchestrator | 2026-04-08 01:07:00.640120 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-08 01:07:00.640124 | orchestrator | Wednesday 08 April 2026 01:06:45 +0000 (0:00:18.295) 0:01:36.055 ******* 2026-04-08 01:07:00.640128 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:07:00.640132 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:00.640135 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:07:00.640140 | orchestrator | 2026-04-08 01:07:00.640143 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:07:00.640153 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-08 01:07:00.640159 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-08 01:07:00.640163 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-08 01:07:00.640167 | orchestrator | 2026-04-08 01:07:00.640171 | orchestrator | 2026-04-08 01:07:00.640175 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:07:00.640180 | orchestrator | Wednesday 08 April 2026 01:06:58 +0000 (0:00:13.095) 0:01:49.150 ******* 2026-04-08 01:07:00.640183 | orchestrator | =============================================================================== 2026-04-08 01:07:00.640188 | orchestrator | magnum : Restart magnum-api container 
---------------------------------- 18.30s 2026-04-08 01:07:00.640204 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.96s 2026-04-08 01:07:00.640209 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.10s 2026-04-08 01:07:00.640215 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.42s 2026-04-08 01:07:00.640221 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.24s 2026-04-08 01:07:00.640227 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.07s 2026-04-08 01:07:00.640235 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.87s 2026-04-08 01:07:00.640244 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.69s 2026-04-08 01:07:00.640252 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.56s 2026-04-08 01:07:00.640258 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.53s 2026-04-08 01:07:00.640264 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.49s 2026-04-08 01:07:00.640270 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.06s 2026-04-08 01:07:00.640277 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.98s 2026-04-08 01:07:00.640283 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.57s 2026-04-08 01:07:00.640289 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.22s 2026-04-08 01:07:00.640296 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.14s 2026-04-08 01:07:00.640301 | orchestrator | magnum : Creating Magnum database user and setting 
permissions ---------- 2.12s 2026-04-08 01:07:00.640307 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.05s 2026-04-08 01:07:00.640313 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.76s 2026-04-08 01:07:00.640319 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.73s 2026-04-08 01:07:00.640327 | orchestrator | 2026-04-08 01:07:00 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:00.641848 | orchestrator | 2026-04-08 01:07:00 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:00.642110 | orchestrator | 2026-04-08 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:03.691826 | orchestrator | 2026-04-08 01:07:03 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:03.693181 | orchestrator | 2026-04-08 01:07:03 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:03.695742 | orchestrator | 2026-04-08 01:07:03 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:03.695815 | orchestrator | 2026-04-08 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:06.741819 | orchestrator | 2026-04-08 01:07:06 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:06.743557 | orchestrator | 2026-04-08 01:07:06 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:06.744179 | orchestrator | 2026-04-08 01:07:06 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:06.744226 | orchestrator | 2026-04-08 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:09.788828 | orchestrator | 2026-04-08 01:07:09 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:09.790327 | 
orchestrator | 2026-04-08 01:07:09 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:09.791829 | orchestrator | 2026-04-08 01:07:09 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:09.791885 | orchestrator | 2026-04-08 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:12.841020 | orchestrator | 2026-04-08 01:07:12 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:12.842398 | orchestrator | 2026-04-08 01:07:12 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:12.844124 | orchestrator | 2026-04-08 01:07:12 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:12.844170 | orchestrator | 2026-04-08 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:15.879386 | orchestrator | 2026-04-08 01:07:15 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:15.880115 | orchestrator | 2026-04-08 01:07:15 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:15.881359 | orchestrator | 2026-04-08 01:07:15 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:15.881393 | orchestrator | 2026-04-08 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:18.913937 | orchestrator | 2026-04-08 01:07:18 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:18.914341 | orchestrator | 2026-04-08 01:07:18 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:18.916014 | orchestrator | 2026-04-08 01:07:18 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:18.916059 | orchestrator | 2026-04-08 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:21.937946 | orchestrator | 2026-04-08 01:07:21 | INFO  | Task 
6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:21.938157 | orchestrator | 2026-04-08 01:07:21 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:21.939862 | orchestrator | 2026-04-08 01:07:21 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:21.939889 | orchestrator | 2026-04-08 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:24.966189 | orchestrator | 2026-04-08 01:07:24 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:24.966504 | orchestrator | 2026-04-08 01:07:24 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:24.967085 | orchestrator | 2026-04-08 01:07:24 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:24.967102 | orchestrator | 2026-04-08 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:28.005892 | orchestrator | 2026-04-08 01:07:28 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:28.007458 | orchestrator | 2026-04-08 01:07:28 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:28.016647 | orchestrator | 2026-04-08 01:07:28 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:28.016765 | orchestrator | 2026-04-08 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:31.066632 | orchestrator | 2026-04-08 01:07:31 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:31.068518 | orchestrator | 2026-04-08 01:07:31 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:31.070471 | orchestrator | 2026-04-08 01:07:31 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:31.070595 | orchestrator | 2026-04-08 01:07:31 | INFO  | Wait 1 second(s) until the next 
check 2026-04-08 01:07:34.109983 | orchestrator | 2026-04-08 01:07:34 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:34.112425 | orchestrator | 2026-04-08 01:07:34 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:34.115063 | orchestrator | 2026-04-08 01:07:34 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:34.115100 | orchestrator | 2026-04-08 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:37.163557 | orchestrator | 2026-04-08 01:07:37 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:37.165378 | orchestrator | 2026-04-08 01:07:37 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:37.167556 | orchestrator | 2026-04-08 01:07:37 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:37.167622 | orchestrator | 2026-04-08 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:40.207488 | orchestrator | 2026-04-08 01:07:40 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:40.210074 | orchestrator | 2026-04-08 01:07:40 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:40.212487 | orchestrator | 2026-04-08 01:07:40 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state STARTED 2026-04-08 01:07:40.212546 | orchestrator | 2026-04-08 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:07:43.240374 | orchestrator | 2026-04-08 01:07:43 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED 2026-04-08 01:07:43.241982 | orchestrator | 2026-04-08 01:07:43 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:07:43.246116 | orchestrator | 2026-04-08 01:07:43 | INFO  | Task 01fd80e7-697b-4a42-bbd4-ab3b4c91f2e3 is in state SUCCESS 2026-04-08 
01:07:43.247898 | orchestrator | 2026-04-08 01:07:43.247940 | orchestrator | 2026-04-08 01:07:43.247947 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 01:07:43.247952 | orchestrator | 2026-04-08 01:07:43.247956 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 01:07:43.247961 | orchestrator | Wednesday 08 April 2026 01:05:40 +0000 (0:00:00.266) 0:00:00.266 ******* 2026-04-08 01:07:43.247965 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:07:43.247970 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:07:43.247974 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:07:43.247978 | orchestrator | 2026-04-08 01:07:43.247982 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 01:07:43.247987 | orchestrator | Wednesday 08 April 2026 01:05:41 +0000 (0:00:00.247) 0:00:00.513 ******* 2026-04-08 01:07:43.247993 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-08 01:07:43.248000 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-08 01:07:43.248005 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-08 01:07:43.248011 | orchestrator | 2026-04-08 01:07:43.248016 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-08 01:07:43.248021 | orchestrator | 2026-04-08 01:07:43.248027 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-08 01:07:43.248037 | orchestrator | Wednesday 08 April 2026 01:05:41 +0000 (0:00:00.242) 0:00:00.756 ******* 2026-04-08 01:07:43.248044 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:07:43.248051 | orchestrator | 2026-04-08 01:07:43.248057 | orchestrator | TASK [grafana : Ensuring config directories exist] 
***************************** 2026-04-08 01:07:43.248063 | orchestrator | Wednesday 08 April 2026 01:05:41 +0000 (0:00:00.508) 0:00:01.265 ******* 2026-04-08 01:07:43.248094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.248104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.248111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.248117 | orchestrator | 2026-04-08 01:07:43.248123 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-08 01:07:43.248129 | orchestrator | Wednesday 08 April 2026 01:05:42 +0000 (0:00:00.901) 0:00:02.167 ******* 2026-04-08 01:07:43.248136 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-08 01:07:43.248155 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-08 01:07:43.248161 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 01:07:43.248168 | orchestrator | 2026-04-08 01:07:43.248174 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-08 01:07:43.248178 | orchestrator | Wednesday 08 April 2026 01:05:43 +0000 (0:00:00.879) 0:00:03.046 ******* 2026-04-08 01:07:43.248182 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:07:43.248186 | orchestrator | 2026-04-08 01:07:43.248190 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-08 01:07:43.248194 | orchestrator | Wednesday 08 April 2026 01:05:44 +0000 (0:00:00.492) 0:00:03.539 ******* 2026-04-08 01:07:43.248210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.248214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.248223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.248227 | orchestrator | 2026-04-08 01:07:43.248231 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-08 01:07:43.248235 | orchestrator | Wednesday 08 April 2026 01:05:45 +0000 (0:00:01.472) 0:00:05.011 ******* 2026-04-08 01:07:43.248239 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 01:07:43.248243 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:07:43.248247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 01:07:43.248254 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:07:43.248261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 01:07:43.248265 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:43.248269 | orchestrator | 2026-04-08 01:07:43.248273 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-08 01:07:43.248277 | orchestrator | Wednesday 08 April 2026 01:05:46 +0000 (0:00:00.367) 0:00:05.379 ******* 2026-04-08 01:07:43.248287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 01:07:43.248291 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:07:43.248295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 01:07:43.248299 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:07:43.248303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-08 01:07:43.248307 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:43.248311 | orchestrator | 2026-04-08 01:07:43.248314 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-08 01:07:43.248318 | orchestrator | Wednesday 08 April 2026 01:05:46 +0000 (0:00:00.712) 0:00:06.091 ******* 2026-04-08 01:07:43.248322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.248329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-08 01:07:43.248340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-08 01:07:43.248351 | orchestrator |
2026-04-08 01:07:43.248357 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-08 01:07:43.248363 | orchestrator | Wednesday 08 April 2026 01:05:48 +0000 (0:00:01.256) 0:00:07.347 *******
2026-04-08 01:07:43.248369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-08 01:07:43.248374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-08 01:07:43.248381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-08 01:07:43.248386 | orchestrator |
2026-04-08 01:07:43.248392 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-08 01:07:43.248398 | orchestrator | Wednesday 08 April 2026 01:05:49 +0000 (0:00:01.230) 0:00:08.578 *******
2026-04-08 01:07:43.248403 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:07:43.248409 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:07:43.248415 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:07:43.248421 | orchestrator |
2026-04-08 01:07:43.248427 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-08 01:07:43.248433 | orchestrator | Wednesday 08 April 2026 01:05:49 +0000 (0:00:00.285) 0:00:08.864 *******
2026-04-08 01:07:43.248439 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-08 01:07:43.248446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-08 01:07:43.248780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-08 01:07:43.248786 | orchestrator |
2026-04-08 01:07:43.248791 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-08 01:07:43.248807 | orchestrator | Wednesday 08 April 2026 01:05:50 +0000 (0:00:01.119) 0:00:09.983 *******
2026-04-08 01:07:43.248813 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-08 01:07:43.248818 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-08 01:07:43.248823 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-08 01:07:43.248828 | orchestrator |
2026-04-08 01:07:43.248833 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-08 01:07:43.248838 | orchestrator | Wednesday 08 April 2026 01:05:51 +0000 (0:00:01.132) 0:00:11.096 *******
2026-04-08 01:07:43.248849 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-08 01:07:43.248853 | orchestrator |
2026-04-08 01:07:43.248857 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-08 01:07:43.248861 | orchestrator | Wednesday 08 April 2026 01:05:52 +0000 (0:00:01.132) 0:00:12.228 *******
2026-04-08 01:07:43.248865 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-08 01:07:43.248869 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-08 01:07:43.248874 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:07:43.248878 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:07:43.248882 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:07:43.248886 | orchestrator |
2026-04-08 01:07:43.248890 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-08 01:07:43.248894 | orchestrator | Wednesday 08 April 2026 01:05:53 +0000 (0:00:00.770) 0:00:12.999 *******
2026-04-08 01:07:43.248898 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:07:43.248902 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:07:43.248906 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:07:43.248910 | orchestrator |
2026-04-08 01:07:43.248913 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-08 01:07:43.248917 | orchestrator | Wednesday 08 April 2026 01:05:53 +0000 (0:00:00.309) 0:00:13.309 *******
2026-04-08 01:07:43.248923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1084657, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9109695, 'gr_name':
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1084657, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9109695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1084657, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9109695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1084696, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 
'mtime': 1775606552.0, 'ctime': 1775607399.921149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1084696, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.921149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1084696, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.921149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1084748, 'dev': 109, 
'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9350088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1084748, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9350088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1084748, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9350088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084687, 
'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9163396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084687, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9163396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084687, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9163396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 
'inode': 1084751, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.248998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1084751, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1084751, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1084672, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.912534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1084672, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.912534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1084672, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.912534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1084720, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9261496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1084720, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9261496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1084720, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9261496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1084741, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9338202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1084741, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9338202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1084741, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9338202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249091 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084654, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9094987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084654, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9094987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084654, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9094987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084664, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.911679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084664, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.911679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084664, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.911679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249127 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084691, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9171495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084691, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9171495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084691, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9171495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249140 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1084725, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9291499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1084725, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9291499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1084725, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9291499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249164 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1084746, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.934612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1084746, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.934612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1084746, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.934612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249345 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084682, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9148614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084682, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9148614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084682, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9148614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1084735, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.93115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1084735, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.93115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1084735, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.93115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1084759, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9379766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1084759, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9379766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1084759, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9379766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1084723, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9261496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1084723, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9261496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1084723, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9261496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1084718, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9235387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1084718, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9235387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1084718, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9235387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1084716, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.922903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1084716, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.922903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1084716, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.922903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1084732, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.93115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1084732, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.93115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1084732, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.93115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1084710, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.922541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1084710, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.922541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1084710, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.922541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1084744, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9338202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1084744, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9338202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1084744, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9338202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1084676, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.914189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1084676, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.914189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1084676, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.914189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084916, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9731512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084916, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9731512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084916, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9731512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084802, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9498627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084802, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9498627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084802, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9498627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084778, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9416392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084778, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9416392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084778, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9416392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1084838, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.953977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1084838, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.953977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1084838, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.953977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084766, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9384866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084766, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9384866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084766, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9384866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084881, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9636738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084881, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9636738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084881, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9636738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084845, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9609735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084845, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9609735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084845, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9609735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1084886, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.964419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1084886, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.964419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1084886, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.964419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084910, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9719045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084910, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9719045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084910, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9719045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1084879, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9623055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1084879, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9623055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1084879, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9623055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084831, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9522316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-08 01:07:43.249952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084831, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9522316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084831, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9522316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084790, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9441502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084790, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9441502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084822, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9515538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084822, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9515538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249985 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084790, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9441502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084781, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9426384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.249993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084781, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9426384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 
01:07:43.249998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084822, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9515538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1084835, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9522316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1084835, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9522316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084781, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9426384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084897, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.971432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084897, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.971432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1084835, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9522316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084892, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.966151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084892, 
'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.966151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084897, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.971432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084770, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9392905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084770, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9392905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084892, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.966151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084776, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9401503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084776, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9401503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084770, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9392905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084874, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9618964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084776, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9401503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084874, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9618964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1084889, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9655628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250148 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084874, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9618964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1084889, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9655628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1084889, 'dev': 109, 'nlink': 1, 'atime': 1775606552.0, 'mtime': 1775606552.0, 'ctime': 1775607399.9655628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-08 01:07:43.250169 | orchestrator | 2026-04-08 01:07:43.250174 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-08 01:07:43.250179 | orchestrator | Wednesday 08 April 2026 01:06:31 +0000 (0:00:37.849) 0:00:51.158 ******* 2026-04-08 01:07:43.250184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.250189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.250194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-08 01:07:43.250203 | orchestrator | 2026-04-08 01:07:43.250207 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-08 01:07:43.250212 | orchestrator | Wednesday 08 April 2026 01:06:32 +0000 (0:00:01.132) 0:00:52.290 ******* 2026-04-08 01:07:43.250216 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:43.250221 | orchestrator | 2026-04-08 01:07:43.250225 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-08 01:07:43.250230 | orchestrator | Wednesday 08 April 2026 01:06:35 +0000 (0:00:02.333) 0:00:54.624 ******* 2026-04-08 01:07:43.250235 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:43.250240 | orchestrator | 2026-04-08 01:07:43.250244 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-08 01:07:43.250251 | orchestrator | Wednesday 08 April 2026 01:06:37 +0000 (0:00:02.157) 0:00:56.782 ******* 2026-04-08 01:07:43.250256 | orchestrator | 2026-04-08 01:07:43.250261 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-08 01:07:43.250265 | orchestrator | Wednesday 08 April 2026 01:06:37 +0000 (0:00:00.079) 0:00:56.861 ******* 2026-04-08 01:07:43.250270 | orchestrator | 2026-04-08 01:07:43.250274 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-08 01:07:43.250279 | orchestrator | Wednesday 08 April 2026 01:06:37 +0000 
(0:00:00.077) 0:00:56.939 ******* 2026-04-08 01:07:43.250284 | orchestrator | 2026-04-08 01:07:43.250289 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-08 01:07:43.250293 | orchestrator | Wednesday 08 April 2026 01:06:37 +0000 (0:00:00.081) 0:00:57.020 ******* 2026-04-08 01:07:43.250297 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:07:43.250302 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:43.250309 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:07:43.250314 | orchestrator | 2026-04-08 01:07:43.250318 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-08 01:07:43.250323 | orchestrator | Wednesday 08 April 2026 01:06:39 +0000 (0:00:01.809) 0:00:58.830 ******* 2026-04-08 01:07:43.250327 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:07:43.250332 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:07:43.250337 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-08 01:07:43.250343 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-04-08 01:07:43.250350 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:07:43.250356 | orchestrator |
2026-04-08 01:07:43.250363 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-08 01:07:43.250368 | orchestrator | Wednesday 08 April 2026 01:07:06 +0000 (0:00:27.190) 0:01:26.021 *******
2026-04-08 01:07:43.250374 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:07:43.250380 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:07:43.250386 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:07:43.250392 | orchestrator |
2026-04-08 01:07:43.250398 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-04-08 01:07:43.250405 | orchestrator | Wednesday 08 April 2026 01:07:36 +0000 (0:00:30.025) 0:01:56.047 *******
2026-04-08 01:07:43.250412 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:07:43.250418 | orchestrator |
2026-04-08 01:07:43.250424 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-04-08 01:07:43.250430 | orchestrator | Wednesday 08 April 2026 01:07:39 +0000 (0:00:02.348) 0:01:58.395 *******
2026-04-08 01:07:43.250436 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:07:43.250443 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:07:43.250448 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:07:43.250454 | orchestrator |
2026-04-08 01:07:43.250460 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-04-08 01:07:43.250466 | orchestrator | Wednesday 08 April 2026 01:07:39 +0000 (0:00:00.267) 0:01:58.663 *******
2026-04-08 01:07:43.250479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-04-08 01:07:43.250486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-04-08 01:07:43.250493 | orchestrator |
2026-04-08 01:07:43.250500 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-04-08 01:07:43.250506 | orchestrator | Wednesday 08 April 2026 01:07:42 +0000 (0:00:02.702) 0:02:01.365 *******
2026-04-08 01:07:43.250512 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:07:43.250519 | orchestrator |
2026-04-08 01:07:43.250525 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:07:43.250532 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-08 01:07:43.250538 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-08 01:07:43.250542 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-08 01:07:43.250546 | orchestrator |
2026-04-08 01:07:43.250550 | orchestrator |
2026-04-08 01:07:43.250554 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:07:43.250558 | orchestrator | Wednesday 08 April 2026 01:07:42 +0000 (0:00:00.252) 0:02:01.618 *******
2026-04-08 01:07:43.250561 | orchestrator | ===============================================================================
2026-04-08 01:07:43.250565 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.85s
2026-04-08 01:07:43.250570 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.03s
2026-04-08 01:07:43.250576 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.19s
2026-04-08 01:07:43.250582 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.70s
2026-04-08 01:07:43.250591 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.35s
2026-04-08 01:07:43.250598 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.33s
2026-04-08 01:07:43.250603 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.16s
2026-04-08 01:07:43.250610 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.81s
2026-04-08 01:07:43.250616 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.47s
2026-04-08 01:07:43.250623 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s
2026-04-08 01:07:43.250629 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.23s
2026-04-08 01:07:43.250635 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.13s
2026-04-08 01:07:43.250647 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.13s
2026-04-08 01:07:43.250651 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.12s
2026-04-08 01:07:43.250655 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.11s
2026-04-08 01:07:43.250659 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.90s
2026-04-08 01:07:43.250683 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s
2026-04-08 01:07:43.250688 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.77s
2026-04-08 01:07:43.250697 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.71s
2026-04-08 01:07:43.250700 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.51s
2026-04-08 01:07:43.250705 | orchestrator | 2026-04-08 01:07:43 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:07:46.292814 | orchestrator | 2026-04-08 01:07:46 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:07:46.294930 | orchestrator | 2026-04-08 01:07:46 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:07:46.295007 | orchestrator | 2026-04-08 01:07:46 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:07:49.360482 | orchestrator | 2026-04-08 01:07:49 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:07:49.360575 | orchestrator | 2026-04-08 01:07:49 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:07:49.360588 | orchestrator | 2026-04-08 01:07:49 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:07:52.398213 | orchestrator | 2026-04-08 01:07:52 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:07:52.399447 | orchestrator | 2026-04-08 01:07:52 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:07:52.399592 | orchestrator | 2026-04-08 01:07:52 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:07:55.444157 | orchestrator | 2026-04-08 01:07:55 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:07:55.444328 | orchestrator | 2026-04-08 01:07:55 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:07:55.444339 | orchestrator | 2026-04-08 01:07:55 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:07:58.484945 | orchestrator | 2026-04-08 01:07:58 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:07:58.485554 | orchestrator | 2026-04-08 01:07:58 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:07:58.485584 | orchestrator | 2026-04-08 01:07:58 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:08:01.541057 | orchestrator | 2026-04-08 01:08:01 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:08:01.543106 | orchestrator | 2026-04-08 01:08:01 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:08:01.543359 | orchestrator | 2026-04-08 01:08:01 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:08:04.587884 | orchestrator | 2026-04-08 01:08:04 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state STARTED
2026-04-08 01:08:04.588555 | orchestrator | 2026-04-08 01:08:04 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED
2026-04-08 01:08:04.588589 | orchestrator | 2026-04-08 01:08:04 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:08:07.636925 | orchestrator | 2026-04-08 01:08:07 | INFO  | Task 6d315da2-3abd-4fa1-b300-98b272ba8738 is in state SUCCESS
2026-04-08 01:08:07.638005 | orchestrator |
2026-04-08 01:08:07.638844 | orchestrator |
2026-04-08 01:08:07.638856 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 01:08:07.638864 | orchestrator |
2026-04-08 01:08:07.638870 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-08 01:08:07.638877 | orchestrator | Wednesday 08 April 2026 00:59:00 +0000 (0:00:00.299) 0:00:00.299 *******
2026-04-08 01:08:07.638892 | orchestrator | changed: [testbed-manager]
2026-04-08 01:08:07.638915 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.638922 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:08:07.638996 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:08:07.639005 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:08:07.639885 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:08:07.639897 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:08:07.639903 | orchestrator |
2026-04-08 01:08:07.639910 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 01:08:07.639917 | orchestrator | Wednesday 08 April 2026 00:59:00 +0000 (0:00:00.717) 0:00:01.017 *******
2026-04-08 01:08:07.639923 | orchestrator | changed: [testbed-manager]
2026-04-08 01:08:07.639929 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.639935 | orchestrator | changed: [testbed-node-1]
2026-04-08 01:08:07.639941 | orchestrator | changed: [testbed-node-2]
2026-04-08 01:08:07.639947 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:08:07.639954 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:08:07.639960 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:08:07.639966 | orchestrator |
2026-04-08 01:08:07.639972 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 01:08:07.639978 | orchestrator | Wednesday 08 April 2026 00:59:01 +0000 (0:00:00.771) 0:00:01.788 *******
2026-04-08 01:08:07.639984 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-08 01:08:07.639991 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-08 01:08:07.639997 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-08 01:08:07.640003 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-08 01:08:07.640009 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-08 01:08:07.640014 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-08 01:08:07.640021 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-08 01:08:07.640026 | orchestrator |
2026-04-08 01:08:07.640032 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-08 01:08:07.640038 | orchestrator |
2026-04-08 01:08:07.640044 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-08 01:08:07.640050 | orchestrator | Wednesday 08 April 2026 00:59:02 +0000 (0:00:01.170) 0:00:02.959 *******
2026-04-08 01:08:07.640056 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:08:07.640063 | orchestrator |
2026-04-08 01:08:07.640069 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-08 01:08:07.640075 | orchestrator | Wednesday 08 April 2026 00:59:03 +0000 (0:00:00.978) 0:00:03.937 *******
2026-04-08 01:08:07.640081 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-08 01:08:07.640088 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-08 01:08:07.640094 | orchestrator |
2026-04-08 01:08:07.640100 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-08 01:08:07.640106 | orchestrator | Wednesday 08 April 2026 00:59:08 +0000 (0:00:04.859) 0:00:08.797 *******
2026-04-08 01:08:07.640113 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-08 01:08:07.640119 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-08 01:08:07.640125 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.640131 | orchestrator |
2026-04-08 01:08:07.640136 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-08 01:08:07.640142 | orchestrator | Wednesday 08 April 2026 00:59:13 +0000 (0:00:04.504) 0:00:13.301 *******
2026-04-08 01:08:07.640148 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.640153 | orchestrator |
2026-04-08 01:08:07.640159 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-08 01:08:07.640166 | orchestrator | Wednesday 08 April 2026 00:59:13 +0000 (0:00:00.616) 0:00:13.918 *******
2026-04-08 01:08:07.640172 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.640179 | orchestrator |
2026-04-08 01:08:07.640185 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-08 01:08:07.640191 | orchestrator | Wednesday 08 April 2026 00:59:15 +0000 (0:00:01.380) 0:00:15.299 *******
2026-04-08 01:08:07.640212 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.640219 | orchestrator |
2026-04-08 01:08:07.640225 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-08 01:08:07.640232 | orchestrator | Wednesday 08 April 2026 00:59:18 +0000 (0:00:03.050) 0:00:18.349 *******
2026-04-08 01:08:07.640238 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.640244 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.640251 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.640257 | orchestrator |
2026-04-08 01:08:07.640264 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-08 01:08:07.640271 | orchestrator | Wednesday 08 April 2026 00:59:18 +0000 (0:00:00.656) 0:00:19.006 *******
2026-04-08 01:08:07.640277 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:08:07.640285 | orchestrator |
2026-04-08 01:08:07.640291 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-08 01:08:07.640298 | orchestrator | Wednesday 08 April 2026 00:59:52 +0000 (0:00:33.843) 0:00:52.849 *******
2026-04-08 01:08:07.640304 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.640311 | orchestrator |
2026-04-08 01:08:07.640317 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-08 01:08:07.640324 | orchestrator | Wednesday 08 April 2026 01:00:09 +0000 (0:00:16.531) 0:01:09.384 *******
2026-04-08 01:08:07.640330 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:08:07.640337 | orchestrator |
2026-04-08 01:08:07.640343 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-08 01:08:07.640350 | orchestrator | Wednesday 08 April 2026 01:00:24 +0000 (0:00:15.444) 0:01:24.828 *******
2026-04-08 01:08:07.640454 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:08:07.640465 | orchestrator |
2026-04-08 01:08:07.640472 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-08 01:08:07.640478 | orchestrator | Wednesday 08 April 2026 01:00:25 +0000 (0:00:00.648) 0:01:25.477 *******
2026-04-08 01:08:07.640485 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.640539 | orchestrator |
2026-04-08 01:08:07.640547 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-08 01:08:07.640612 | orchestrator | Wednesday 08 April 2026 01:00:25 +0000 (0:00:00.505) 0:01:25.983 *******
2026-04-08 01:08:07.640620 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:08:07.640627 | orchestrator |
2026-04-08 01:08:07.640634 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-08 01:08:07.640656 | orchestrator | Wednesday 08 April 2026 01:00:26 +0000 (0:00:00.757) 0:01:26.740 *******
2026-04-08 01:08:07.640662 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:08:07.640668 | orchestrator |
2026-04-08 01:08:07.640674 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-08 01:08:07.640680 | orchestrator | Wednesday 08 April 2026 01:00:46 +0000 (0:00:20.140) 0:01:46.881 *******
2026-04-08 01:08:07.640718 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.640726 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.640783 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.640791 | orchestrator |
2026-04-08 01:08:07.640798 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-08 01:08:07.640804 | orchestrator |
2026-04-08 01:08:07.640811 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-08 01:08:07.640818 | orchestrator | Wednesday 08 April 2026 01:00:46 +0000 (0:00:00.376) 0:01:47.258 *******
2026-04-08 01:08:07.640825 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:08:07.640832 | orchestrator |
2026-04-08 01:08:07.640838 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-08 01:08:07.640845 | orchestrator | Wednesday 08 April 2026 01:00:47 +0000 (0:00:00.875) 0:01:48.133 *******
2026-04-08 01:08:07.640852 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.640869 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.640876 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.640882 | orchestrator |
2026-04-08 01:08:07.640888 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-08 01:08:07.640895 | orchestrator | Wednesday 08 April 2026 01:00:50 +0000 (0:00:02.358) 0:01:50.492 *******
2026-04-08 01:08:07.640900 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.640906 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.640912 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.640917 | orchestrator |
2026-04-08 01:08:07.640924 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-08 01:08:07.640929 | orchestrator | Wednesday 08 April 2026 01:00:52 +0000 (0:00:02.620) 0:01:53.112 *******
2026-04-08 01:08:07.640934 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.640941 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.640946 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.640951 | orchestrator |
2026-04-08 01:08:07.640957 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-08 01:08:07.640963 | orchestrator | Wednesday 08 April 2026 01:00:53 +0000 (0:00:00.497) 0:01:53.610 *******
2026-04-08 01:08:07.640969 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-08 01:08:07.640975 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.640981 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-08 01:08:07.640987 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.640993 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-08 01:08:07.640999 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-08 01:08:07.641005 | orchestrator |
2026-04-08 01:08:07.641011 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-08 01:08:07.641018 | orchestrator | Wednesday 08 April 2026 01:01:02 +0000 (0:00:08.912) 0:02:02.522 *******
2026-04-08 01:08:07.641023 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.641029 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641034 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641040 | orchestrator |
2026-04-08 01:08:07.641046 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-08 01:08:07.641052 | orchestrator | Wednesday 08 April 2026 01:01:02 +0000 (0:00:00.577) 0:02:03.099 *******
2026-04-08 01:08:07.641057 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-08 01:08:07.641063 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.641069 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-08 01:08:07.641075 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641080 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-08 01:08:07.641086 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641092 | orchestrator |
2026-04-08 01:08:07.641097 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-08 01:08:07.641104 | orchestrator | Wednesday 08 April 2026 01:01:03 +0000 (0:00:01.120) 0:02:04.220 *******
2026-04-08 01:08:07.641110 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641115 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641121 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.641127 | orchestrator |
2026-04-08 01:08:07.641133 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-08 01:08:07.641138 | orchestrator | Wednesday 08 April 2026 01:01:04 +0000 (0:00:00.539) 0:02:04.759 *******
2026-04-08 01:08:07.641144 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641150 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641156 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.641161 | orchestrator |
2026-04-08 01:08:07.641168 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-08 01:08:07.641174 | orchestrator | Wednesday 08 April 2026 01:01:05 +0000 (0:00:01.076) 0:02:05.836 *******
2026-04-08 01:08:07.641188 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641193 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641271 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.641281 | orchestrator |
2026-04-08 01:08:07.641287 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-08 01:08:07.641293 | orchestrator | Wednesday 08 April 2026 01:01:07 +0000 (0:00:02.082) 0:02:07.918 *******
2026-04-08 01:08:07.641298 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641304 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641309 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:08:07.641315 | orchestrator |
2026-04-08 01:08:07.641328 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-08 01:08:07.641335 | orchestrator | Wednesday 08 April 2026 01:01:30 +0000 (0:00:22.535) 0:02:30.454 *******
2026-04-08 01:08:07.641341 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641346 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641351 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:08:07.641357 | orchestrator |
2026-04-08 01:08:07.641362 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-08 01:08:07.641368 | orchestrator | Wednesday 08 April 2026 01:01:44 +0000 (0:00:14.219) 0:02:44.674 *******
2026-04-08 01:08:07.641374 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:08:07.641380 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641385 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641391 | orchestrator |
2026-04-08 01:08:07.641397 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-08 01:08:07.641403 | orchestrator | Wednesday 08 April 2026 01:01:45 +0000 (0:00:00.828) 0:02:45.502 *******
2026-04-08 01:08:07.641408 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641414 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641420 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:08:07.641425 | orchestrator |
2026-04-08 01:08:07.641431 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-08 01:08:07.641436 | orchestrator | Wednesday 08 April 2026 01:02:00 +0000 (0:00:14.925) 0:03:00.428 *******
2026-04-08 01:08:07.641441 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.641447 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641453 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641458 | orchestrator |
2026-04-08 01:08:07.641464 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-08 01:08:07.641469 | orchestrator | Wednesday 08 April 2026 01:02:02 +0000 (0:00:01.844) 0:03:02.272 *******
2026-04-08 01:08:07.641475 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.641480 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.641486 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.641492 | orchestrator |
2026-04-08 01:08:07.641497 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-08 01:08:07.641503 | orchestrator |
2026-04-08 01:08:07.641508 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-08 01:08:07.641515 | orchestrator | Wednesday 08 April 2026 01:02:02 +0000 (0:00:00.296) 0:03:02.569 *******
2026-04-08 01:08:07.641521 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 01:08:07.641528 | orchestrator |
2026-04-08 01:08:07.641534 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-04-08 01:08:07.641539 | orchestrator | Wednesday 08 April 2026 01:02:03 +0000 (0:00:00.800) 0:03:03.369 *******
2026-04-08 01:08:07.641545 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-08 01:08:07.641550 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-08 01:08:07.641556 | orchestrator |
2026-04-08 01:08:07.641562 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-04-08 01:08:07.641567 | orchestrator | Wednesday 08 April 2026 01:02:06 +0000 (0:00:03.823) 0:03:07.193 *******
2026-04-08 01:08:07.641585 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-08 01:08:07.641592 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-08 01:08:07.641598 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-08 01:08:07.641605 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-08 01:08:07.641611 | orchestrator |
2026-04-08 01:08:07.641616 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-08 01:08:07.641622 | orchestrator | Wednesday 08 April 2026 01:02:14 +0000 (0:00:07.666) 0:03:14.859 *******
2026-04-08 01:08:07.641627 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-08 01:08:07.641633 | orchestrator |
2026-04-08 01:08:07.641661 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-08 01:08:07.641667 | orchestrator | Wednesday 08 April 2026 01:02:18 +0000 (0:00:03.949) 0:03:18.809 *******
2026-04-08 01:08:07.641672 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-08 01:08:07.641678 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-08 01:08:07.641683 | orchestrator |
2026-04-08 01:08:07.641689 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-08 01:08:07.641694 | orchestrator | Wednesday 08 April 2026 01:02:23 +0000 (0:00:04.836) 0:03:23.646 *******
2026-04-08 01:08:07.641700 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-08 01:08:07.641707 | 
orchestrator | 2026-04-08 01:08:07.641712 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-08 01:08:07.641718 | orchestrator | Wednesday 08 April 2026 01:02:27 +0000 (0:00:03.780) 0:03:27.427 ******* 2026-04-08 01:08:07.641724 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-08 01:08:07.641730 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-08 01:08:07.641735 | orchestrator | 2026-04-08 01:08:07.641741 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-08 01:08:07.641893 | orchestrator | Wednesday 08 April 2026 01:02:35 +0000 (0:00:08.579) 0:03:36.007 ******* 2026-04-08 01:08:07.641920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.641932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.641948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.641984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.641998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642038 | orchestrator | 2026-04-08 01:08:07.642046 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-08 01:08:07.642052 | orchestrator | Wednesday 08 April 2026 01:02:38 +0000 (0:00:02.671) 0:03:38.678 ******* 2026-04-08 01:08:07.642063 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.642070 | orchestrator | 2026-04-08 01:08:07.642076 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-08 01:08:07.642082 | orchestrator | Wednesday 08 April 2026 01:02:38 +0000 (0:00:00.346) 0:03:39.026 ******* 2026-04-08 01:08:07.642089 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.642095 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.642101 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.642108 | orchestrator | 2026-04-08 01:08:07.642115 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-08 01:08:07.642121 | orchestrator | Wednesday 08 April 2026 01:02:39 +0000 (0:00:00.740) 0:03:39.768 ******* 2026-04-08 01:08:07.642128 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 01:08:07.642134 | orchestrator | 2026-04-08 01:08:07.642140 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-08 01:08:07.642146 | orchestrator | Wednesday 08 April 2026 01:02:40 +0000 (0:00:01.450) 0:03:41.219 ******* 2026-04-08 01:08:07.642152 | orchestrator | skipping: [testbed-node-0] 
2026-04-08 01:08:07.642158 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.642164 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.642170 | orchestrator | 2026-04-08 01:08:07.642176 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-08 01:08:07.642181 | orchestrator | Wednesday 08 April 2026 01:02:41 +0000 (0:00:00.392) 0:03:41.611 ******* 2026-04-08 01:08:07.642187 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:08:07.642193 | orchestrator | 2026-04-08 01:08:07.642199 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-08 01:08:07.642204 | orchestrator | Wednesday 08 April 2026 01:02:42 +0000 (0:00:01.066) 0:03:42.678 ******* 2026-04-08 01:08:07.642210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642295 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642302 | orchestrator | 2026-04-08 01:08:07.642308 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-08 01:08:07.642313 | orchestrator | Wednesday 08 April 2026 01:02:45 +0000 (0:00:03.375) 0:03:46.053 ******* 2026-04-08 01:08:07.642328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642346 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.642352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642400 | orchestrator | skipping: [testbed-node-2] 
2026-04-08 01:08:07.642406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642412 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.642418 | orchestrator | 2026-04-08 01:08:07.642424 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-08 01:08:07.642429 | orchestrator | Wednesday 08 April 2026 01:02:46 +0000 (0:00:00.472) 0:03:46.526 ******* 2026-04-08 01:08:07.642436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642449 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.642481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642501 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.642508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642522 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.642527 | orchestrator | 2026-04-08 01:08:07.642534 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-08 01:08:07.642541 | orchestrator | Wednesday 08 April 2026 01:02:47 +0000 (0:00:01.319) 0:03:47.845 ******* 2026-04-08 01:08:07.642568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642691 | orchestrator | 2026-04-08 01:08:07.642698 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-08 01:08:07.642704 | orchestrator | Wednesday 08 April 2026 01:02:50 +0000 (0:00:02.493) 0:03:50.338 ******* 2026-04-08 01:08:07.642711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.642761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.642780 | orchestrator | 2026-04-08 01:08:07.642786 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-08 01:08:07.642792 | orchestrator | Wednesday 08 April 2026 01:02:59 +0000 (0:00:09.536) 0:03:59.875 ******* 2026-04-08 01:08:07.642798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642844 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.642854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-08 01:08:07.642874 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.642884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.642890 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.642896 | orchestrator | 2026-04-08 01:08:07.642901 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-08 01:08:07.642907 | orchestrator | Wednesday 08 April 2026 01:03:00 +0000 (0:00:00.947) 0:04:00.823 ******* 2026-04-08 01:08:07.642912 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:08:07.642918 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:08:07.642923 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:08:07.642928 | orchestrator | 2026-04-08 01:08:07.642951 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-08 01:08:07.642958 | orchestrator | Wednesday 08 April 2026 01:03:02 +0000 (0:00:02.339) 0:04:03.163 ******* 2026-04-08 01:08:07.642963 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.642969 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.642975 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.642980 | orchestrator | 2026-04-08 01:08:07.642993 | orchestrator | TASK [nova : Check nova containers] 
******************************************** 2026-04-08 01:08:07.643000 | orchestrator | Wednesday 08 April 2026 01:03:03 +0000 (0:00:00.335) 0:04:03.498 ******* 2026-04-08 01:08:07.643007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.643013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.643045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-08 01:08:07.643055 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643075 | orchestrator | 2026-04-08 01:08:07.643080 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-08 01:08:07.643086 | 
orchestrator | Wednesday 08 April 2026 01:03:05 +0000 (0:00:02.289) 0:04:05.788 ******* 2026-04-08 01:08:07.643092 | orchestrator | 2026-04-08 01:08:07.643097 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-08 01:08:07.643103 | orchestrator | Wednesday 08 April 2026 01:03:05 +0000 (0:00:00.322) 0:04:06.111 ******* 2026-04-08 01:08:07.643109 | orchestrator | 2026-04-08 01:08:07.643117 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-08 01:08:07.643123 | orchestrator | Wednesday 08 April 2026 01:03:06 +0000 (0:00:00.209) 0:04:06.321 ******* 2026-04-08 01:08:07.643128 | orchestrator | 2026-04-08 01:08:07.643144 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-08 01:08:07.643151 | orchestrator | Wednesday 08 April 2026 01:03:06 +0000 (0:00:00.462) 0:04:06.783 ******* 2026-04-08 01:08:07.643157 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:08:07.643163 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:08:07.643168 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:08:07.643174 | orchestrator | 2026-04-08 01:08:07.643180 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-08 01:08:07.643186 | orchestrator | Wednesday 08 April 2026 01:03:27 +0000 (0:00:21.391) 0:04:28.174 ******* 2026-04-08 01:08:07.643193 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:08:07.643199 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:08:07.643204 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:08:07.643210 | orchestrator | 2026-04-08 01:08:07.643216 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-08 01:08:07.643222 | orchestrator | 2026-04-08 01:08:07.643226 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
2026-04-08 01:08:07.643230 | orchestrator | Wednesday 08 April 2026 01:03:35 +0000 (0:00:07.530) 0:04:35.705 ******* 2026-04-08 01:08:07.643235 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:08:07.643240 | orchestrator | 2026-04-08 01:08:07.643244 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-08 01:08:07.643247 | orchestrator | Wednesday 08 April 2026 01:03:36 +0000 (0:00:01.031) 0:04:36.736 ******* 2026-04-08 01:08:07.643251 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.643255 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.643259 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.643263 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.643266 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.643270 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.643274 | orchestrator | 2026-04-08 01:08:07.643277 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-08 01:08:07.643281 | orchestrator | Wednesday 08 April 2026 01:03:37 +0000 (0:00:00.612) 0:04:37.349 ******* 2026-04-08 01:08:07.643285 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.643289 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.643292 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.643296 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 01:08:07.643300 | orchestrator | 2026-04-08 01:08:07.643304 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-08 01:08:07.643325 | orchestrator | Wednesday 08 April 2026 01:03:37 +0000 (0:00:00.889) 0:04:38.238 ******* 2026-04-08 01:08:07.643330 | orchestrator | ok: [testbed-node-3] => 
(item=br_netfilter) 2026-04-08 01:08:07.643335 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-08 01:08:07.643338 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-08 01:08:07.643342 | orchestrator | 2026-04-08 01:08:07.643346 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-08 01:08:07.643353 | orchestrator | Wednesday 08 April 2026 01:03:39 +0000 (0:00:01.097) 0:04:39.335 ******* 2026-04-08 01:08:07.643357 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-08 01:08:07.643361 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-08 01:08:07.643365 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-08 01:08:07.643369 | orchestrator | 2026-04-08 01:08:07.643373 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-08 01:08:07.643377 | orchestrator | Wednesday 08 April 2026 01:03:40 +0000 (0:00:01.243) 0:04:40.579 ******* 2026-04-08 01:08:07.643381 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-08 01:08:07.643384 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.643392 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-08 01:08:07.643396 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.643400 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-08 01:08:07.643404 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.643407 | orchestrator | 2026-04-08 01:08:07.643411 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-08 01:08:07.643415 | orchestrator | Wednesday 08 April 2026 01:03:40 +0000 (0:00:00.594) 0:04:41.173 ******* 2026-04-08 01:08:07.643419 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-08 01:08:07.643423 | orchestrator | 
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-08 01:08:07.643427 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.643430 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-08 01:08:07.643434 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-08 01:08:07.643438 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.643442 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-08 01:08:07.643446 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-08 01:08:07.643449 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.643453 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-08 01:08:07.643457 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-08 01:08:07.643461 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-08 01:08:07.643465 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-08 01:08:07.643468 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-08 01:08:07.643472 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-08 01:08:07.643476 | orchestrator | 2026-04-08 01:08:07.643480 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-08 01:08:07.643483 | orchestrator | Wednesday 08 April 2026 01:03:42 +0000 (0:00:01.941) 0:04:43.115 ******* 2026-04-08 01:08:07.643487 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.643491 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.643495 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.643499 | orchestrator | 
changed: [testbed-node-3] 2026-04-08 01:08:07.643502 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.643506 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.643510 | orchestrator | 2026-04-08 01:08:07.643514 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-08 01:08:07.643518 | orchestrator | Wednesday 08 April 2026 01:03:43 +0000 (0:00:01.128) 0:04:44.243 ******* 2026-04-08 01:08:07.643521 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.643525 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.643529 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.643533 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.643537 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.643540 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.643544 | orchestrator | 2026-04-08 01:08:07.643548 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-08 01:08:07.643552 | orchestrator | Wednesday 08 April 2026 01:03:45 +0000 (0:00:01.895) 0:04:46.139 ******* 2026-04-08 01:08:07.643557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643695 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643732 | orchestrator | 2026-04-08 01:08:07.643736 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-08 01:08:07.643740 | orchestrator | Wednesday 08 April 2026 01:03:48 +0000 (0:00:02.193) 0:04:48.333 ******* 2026-04-08 01:08:07.643744 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:08:07.643750 | orchestrator | 2026-04-08 01:08:07.643753 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-08 01:08:07.643757 | orchestrator | Wednesday 08 April 2026 01:03:49 +0000 (0:00:01.229) 0:04:49.562 ******* 2026-04-08 01:08:07.643761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643811 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643853 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 
01:08:07.643872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.643888 | orchestrator | 2026-04-08 01:08:07.643894 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-08 01:08:07.643899 | orchestrator | Wednesday 08 April 2026 01:03:53 +0000 (0:00:03.979) 0:04:53.542 ******* 2026-04-08 01:08:07.643929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.643937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.643944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.643950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.643963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.643970 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.643990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.643995 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.644004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.644008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.644012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644020 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.644024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.644028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644032 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.644047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.644055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644059 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.644063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.644067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644075 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.644079 | orchestrator | 2026-04-08 01:08:07.644083 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-08 01:08:07.644087 | orchestrator | Wednesday 08 April 2026 01:03:55 +0000 (0:00:01.836) 0:04:55.378 ******* 2026-04-08 01:08:07.644094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.644101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.644122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644131 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.644141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.644148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644155 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.644162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.644172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.644176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.644193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644198 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.644204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644209 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.644213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.644220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.644224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644228 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.644232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.644248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.644253 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.644257 | orchestrator | 2026-04-08 01:08:07.644263 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-08 01:08:07.644267 | orchestrator | Wednesday 08 April 2026 01:03:58 +0000 (0:00:03.452) 0:04:58.832 ******* 2026-04-08 01:08:07.644271 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.644275 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.644279 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.644283 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 01:08:07.644287 | orchestrator | 2026-04-08 01:08:07.644290 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-08 01:08:07.644294 | orchestrator | Wednesday 08 April 2026 01:03:59 +0000 (0:00:00.822) 0:04:59.654 ******* 2026-04-08 01:08:07.644298 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 01:08:07.644305 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 01:08:07.644309 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 01:08:07.644313 | orchestrator | 2026-04-08 01:08:07.644316 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-08 01:08:07.644320 | orchestrator | Wednesday 08 April 2026 01:04:00 +0000 (0:00:00.834) 0:05:00.489 ******* 2026-04-08 
01:08:07.644324 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 01:08:07.644328 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 01:08:07.644332 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 01:08:07.644335 | orchestrator | 2026-04-08 01:08:07.644339 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-08 01:08:07.644344 | orchestrator | Wednesday 08 April 2026 01:04:01 +0000 (0:00:01.025) 0:05:01.515 ******* 2026-04-08 01:08:07.644351 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:08:07.644357 | orchestrator | ok: [testbed-node-4] 2026-04-08 01:08:07.644366 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:08:07.644375 | orchestrator | 2026-04-08 01:08:07.644381 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-08 01:08:07.644387 | orchestrator | Wednesday 08 April 2026 01:04:01 +0000 (0:00:00.460) 0:05:01.975 ******* 2026-04-08 01:08:07.644394 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:08:07.644400 | orchestrator | ok: [testbed-node-4] 2026-04-08 01:08:07.644406 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:08:07.644412 | orchestrator | 2026-04-08 01:08:07.644417 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-08 01:08:07.644423 | orchestrator | Wednesday 08 April 2026 01:04:02 +0000 (0:00:00.484) 0:05:02.460 ******* 2026-04-08 01:08:07.644430 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-08 01:08:07.644436 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-08 01:08:07.644446 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-08 01:08:07.644452 | orchestrator | 2026-04-08 01:08:07.644458 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-08 01:08:07.644464 | orchestrator | Wednesday 08 April 2026 01:04:03 
+0000 (0:00:01.084) 0:05:03.544 ******* 2026-04-08 01:08:07.644470 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-08 01:08:07.644476 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-08 01:08:07.644483 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-08 01:08:07.644490 | orchestrator | 2026-04-08 01:08:07.644496 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-08 01:08:07.644502 | orchestrator | Wednesday 08 April 2026 01:04:04 +0000 (0:00:01.362) 0:05:04.906 ******* 2026-04-08 01:08:07.644510 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-08 01:08:07.644514 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-08 01:08:07.644517 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-08 01:08:07.644521 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-08 01:08:07.644525 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-08 01:08:07.644529 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-08 01:08:07.644532 | orchestrator | 2026-04-08 01:08:07.644537 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-08 01:08:07.644542 | orchestrator | Wednesday 08 April 2026 01:04:08 +0000 (0:00:03.796) 0:05:08.702 ******* 2026-04-08 01:08:07.644548 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.644553 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.644563 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.644570 | orchestrator | 2026-04-08 01:08:07.644576 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-08 01:08:07.644582 | orchestrator | Wednesday 08 April 2026 01:04:08 +0000 (0:00:00.331) 0:05:09.034 ******* 2026-04-08 01:08:07.644587 | orchestrator | 
skipping: [testbed-node-3] 2026-04-08 01:08:07.644600 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.644605 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.644610 | orchestrator | 2026-04-08 01:08:07.644616 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-08 01:08:07.644621 | orchestrator | Wednesday 08 April 2026 01:04:09 +0000 (0:00:00.427) 0:05:09.462 ******* 2026-04-08 01:08:07.644627 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.644633 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.644660 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.644668 | orchestrator | 2026-04-08 01:08:07.644673 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-08 01:08:07.644679 | orchestrator | Wednesday 08 April 2026 01:04:11 +0000 (0:00:02.021) 0:05:11.483 ******* 2026-04-08 01:08:07.644712 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-08 01:08:07.644720 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-08 01:08:07.644726 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-08 01:08:07.644740 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-08 01:08:07.644746 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-08 01:08:07.644751 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 
'enabled': 'yes'}) 2026-04-08 01:08:07.644756 | orchestrator | 2026-04-08 01:08:07.644762 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-08 01:08:07.644768 | orchestrator | Wednesday 08 April 2026 01:04:14 +0000 (0:00:02.873) 0:05:14.356 ******* 2026-04-08 01:08:07.644774 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 01:08:07.644780 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 01:08:07.644785 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 01:08:07.644791 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 01:08:07.644797 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.644803 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 01:08:07.644808 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.644813 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 01:08:07.644818 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.644824 | orchestrator | 2026-04-08 01:08:07.644830 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-08 01:08:07.644836 | orchestrator | Wednesday 08 April 2026 01:04:17 +0000 (0:00:03.232) 0:05:17.589 ******* 2026-04-08 01:08:07.644842 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.644847 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.644853 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.644860 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-04-08 01:08:07.644866 | orchestrator | 2026-04-08 01:08:07.644872 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-08 01:08:07.644878 | orchestrator | Wednesday 08 April 2026 01:04:18 +0000 (0:00:01.618) 0:05:19.208 ******* 2026-04-08 01:08:07.644884 | 
orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 01:08:07.644890 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 01:08:07.644896 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 01:08:07.644901 | orchestrator | 2026-04-08 01:08:07.644907 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-08 01:08:07.644913 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:01.484) 0:05:20.692 ******* 2026-04-08 01:08:07.644925 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.644931 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.644936 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.644942 | orchestrator | 2026-04-08 01:08:07.644948 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-08 01:08:07.644954 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.348) 0:05:21.041 ******* 2026-04-08 01:08:07.644960 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.644966 | orchestrator | 2026-04-08 01:08:07.644972 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-08 01:08:07.644978 | orchestrator | Wednesday 08 April 2026 01:04:20 +0000 (0:00:00.192) 0:05:21.234 ******* 2026-04-08 01:08:07.644984 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.644991 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.644997 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.645004 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.645008 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.645012 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.645015 | orchestrator | 2026-04-08 01:08:07.645019 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-08 01:08:07.645023 | orchestrator | Wednesday 08 
April 2026 01:04:22 +0000 (0:00:01.210) 0:05:22.444 ******* 2026-04-08 01:08:07.645027 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 01:08:07.645031 | orchestrator | 2026-04-08 01:08:07.645035 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-08 01:08:07.645038 | orchestrator | Wednesday 08 April 2026 01:04:22 +0000 (0:00:00.691) 0:05:23.136 ******* 2026-04-08 01:08:07.645042 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.645046 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.645050 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.645054 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.645057 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.645061 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.645065 | orchestrator | 2026-04-08 01:08:07.645069 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-08 01:08:07.645072 | orchestrator | Wednesday 08 April 2026 01:04:23 +0000 (0:00:00.622) 0:05:23.759 ******* 2026-04-08 01:08:07.645090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645122 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645217 | orchestrator | 2026-04-08 01:08:07.645221 | orchestrator | 
TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-08 01:08:07.645225 | orchestrator | Wednesday 08 April 2026 01:04:27 +0000 (0:00:04.421) 0:05:28.181 ******* 2026-04-08 01:08:07.645228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.645233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.645237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.645247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.645251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.645264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.645268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.645295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-08 01:08:07.645299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-08 01:08:07.645303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:08:07.645307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:08:07.645311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-08 01:08:07.645314 | orchestrator |
2026-04-08 01:08:07.645318 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-08 01:08:07.645322 | orchestrator | Wednesday 08 April 2026 01:04:34 +0000 (0:00:06.791) 0:05:34.972 *******
2026-04-08 01:08:07.645326 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:08:07.645330 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:08:07.645334 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:08:07.645337 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645343 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645347 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645354 | orchestrator |
2026-04-08 01:08:07.645358 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-08 01:08:07.645362 | orchestrator | Wednesday 08 April 2026 01:04:36 +0000 (0:00:01.489) 0:05:36.462 *******
2026-04-08 01:08:07.645366 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-08 01:08:07.645372 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-08 01:08:07.645376 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-08 01:08:07.645380 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-08 01:08:07.645384 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-08 01:08:07.645387 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-08 01:08:07.645391 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-08 01:08:07.645395 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645399 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-08 01:08:07.645403 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645407 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-08 01:08:07.645410 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645414 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-08 01:08:07.645418 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-08 01:08:07.645422 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-08 01:08:07.645426 | orchestrator |
2026-04-08 01:08:07.645429 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-08 01:08:07.645433 | orchestrator | Wednesday 08 April 2026 01:04:40 +0000 (0:00:04.520) 0:05:40.982 *******
2026-04-08 01:08:07.645437 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:08:07.645440 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:08:07.645444 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:08:07.645448 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645452 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645455 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645459 | orchestrator |
2026-04-08 01:08:07.645463 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-08 01:08:07.645467 | orchestrator | Wednesday 08 April 2026 01:04:41 +0000 (0:00:00.671) 0:05:41.653 *******
2026-04-08 01:08:07.645470 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-08 01:08:07.645475 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-08 01:08:07.645478 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-08 01:08:07.645482 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-08 01:08:07.645486 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-08 01:08:07.645490 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645493 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-08 01:08:07.645497 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645501 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645507 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645511 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645515 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645519 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645522 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645526 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645530 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645534 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645537 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645541 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645547 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645551 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-08 01:08:07.645555 | orchestrator |
2026-04-08 01:08:07.645559 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-08 01:08:07.645568 | orchestrator | Wednesday 08 April 2026 01:04:46 +0000 (0:00:04.894) 0:05:46.548 *******
2026-04-08 01:08:07.645572 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 01:08:07.645576 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 01:08:07.645580 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 01:08:07.645584 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-08 01:08:07.645587 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-08 01:08:07.645591 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-08 01:08:07.645595 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 01:08:07.645599 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 01:08:07.645603 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 01:08:07.645606 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 01:08:07.645610 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 01:08:07.645614 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-08 01:08:07.645617 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645621 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 01:08:07.645625 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-08 01:08:07.645629 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645632 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-08 01:08:07.645636 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-08 01:08:07.645692 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645698 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-08 01:08:07.645708 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-08 01:08:07.645712 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 01:08:07.645716 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 01:08:07.645720 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 01:08:07.645724 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-08 01:08:07.645727 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-08 01:08:07.645731 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-08 01:08:07.645735 | orchestrator |
2026-04-08 01:08:07.645739 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-08 01:08:07.645743 | orchestrator | Wednesday 08 April 2026 01:04:55 +0000 (0:00:08.919) 0:05:55.467 *******
2026-04-08 01:08:07.645746 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:08:07.645750 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:08:07.645754 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:08:07.645758 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645761 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645765 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645769 | orchestrator |
2026-04-08 01:08:07.645773 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-08 01:08:07.645777 | orchestrator | Wednesday 08 April 2026 01:04:55 +0000 (0:00:00.494) 0:05:55.961 *******
2026-04-08 01:08:07.645780 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:08:07.645784 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:08:07.645788 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:08:07.645792 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645795 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645799 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645803 | orchestrator |
2026-04-08 01:08:07.645807 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-08 01:08:07.645811 | orchestrator | Wednesday 08 April 2026 01:04:56 +0000 (0:00:00.668) 0:05:56.630 *******
2026-04-08 01:08:07.645814 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645818 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645822 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:08:07.645826 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645829 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:08:07.645833 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:08:07.645837 | orchestrator |
2026-04-08 01:08:07.645840 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-08 01:08:07.645844 | orchestrator | Wednesday 08 April 2026 01:04:58 +0000 (0:00:02.525) 0:05:59.156 *******
2026-04-08 01:08:07.645848 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.645855 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:08:07.645859 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.645863 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.645866 | orchestrator | changed: [testbed-node-4]
2026-04-08
01:08:07.645870 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.645874 | orchestrator | 2026-04-08 01:08:07.645878 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-08 01:08:07.645884 | orchestrator | Wednesday 08 April 2026 01:05:01 +0000 (0:00:02.946) 0:06:02.103 ******* 2026-04-08 01:08:07.645889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.645899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.645904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.645912 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.645921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.645928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 01:08:07.645943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.645956 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.645962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.645970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.645974 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.645979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-08 01:08:07.645985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-08 
01:08:07.645995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.646000 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.646010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.646053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.646060 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.646066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-08 01:08:07.646072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 01:08:07.646077 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.646083 | orchestrator | 2026-04-08 01:08:07.646089 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-08 01:08:07.646095 | orchestrator | Wednesday 08 April 2026 01:05:03 +0000 (0:00:01.678) 0:06:03.781 ******* 2026-04-08 01:08:07.646101 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-08 01:08:07.646107 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic) 
2026-04-08 01:08:07.646113 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-08 01:08:07.646119 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-08 01:08:07.646124 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:08:07.646130 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-08 01:08:07.646135 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-08 01:08:07.646141 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:08:07.646147 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-08 01:08:07.646153 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-08 01:08:07.646159 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:08:07.646164 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-08 01:08:07.646170 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-08 01:08:07.646176 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:08:07.646181 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:08:07.646193 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-08 01:08:07.646198 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-08 01:08:07.646204 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:08:07.646210 | orchestrator |
2026-04-08 01:08:07.646216 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-04-08 01:08:07.646222 | orchestrator | Wednesday 08 April 2026 01:05:04 +0000 (0:00:00.813) 0:06:04.594 *******
2026-04-08 01:08:07.646238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged':
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-08 01:08:07.646364 | orchestrator | 2026-04-08 01:08:07.646368 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-08 01:08:07.646372 | orchestrator | Wednesday 08 April 2026 01:05:07 +0000 (0:00:03.087) 0:06:07.681 ******* 2026-04-08 01:08:07.646376 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.646380 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.646383 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.646387 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.646391 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.646395 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.646398 | orchestrator | 2026-04-08 01:08:07.646402 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-08 01:08:07.646406 | orchestrator | Wednesday 08 April 2026 01:05:08 +0000 (0:00:01.214) 0:06:08.896 ******* 2026-04-08 01:08:07.646409 | orchestrator | 2026-04-08 01:08:07.646413 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-08 01:08:07.646417 | orchestrator | Wednesday 08 April 2026 01:05:08 +0000 (0:00:00.130) 0:06:09.026 ******* 2026-04-08 01:08:07.646425 | orchestrator | 2026-04-08 01:08:07.646429 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-08 01:08:07.646432 | orchestrator | Wednesday 08 April 2026 01:05:08 
+0000 (0:00:00.233) 0:06:09.259 ******* 2026-04-08 01:08:07.646436 | orchestrator | 2026-04-08 01:08:07.646440 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-08 01:08:07.646443 | orchestrator | Wednesday 08 April 2026 01:05:09 +0000 (0:00:00.168) 0:06:09.428 ******* 2026-04-08 01:08:07.646447 | orchestrator | 2026-04-08 01:08:07.646451 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-08 01:08:07.646455 | orchestrator | Wednesday 08 April 2026 01:05:09 +0000 (0:00:00.142) 0:06:09.571 ******* 2026-04-08 01:08:07.646458 | orchestrator | 2026-04-08 01:08:07.646462 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-08 01:08:07.646466 | orchestrator | Wednesday 08 April 2026 01:05:09 +0000 (0:00:00.343) 0:06:09.914 ******* 2026-04-08 01:08:07.646469 | orchestrator | 2026-04-08 01:08:07.646473 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-08 01:08:07.646477 | orchestrator | Wednesday 08 April 2026 01:05:09 +0000 (0:00:00.137) 0:06:10.052 ******* 2026-04-08 01:08:07.646481 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:08:07.646484 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:08:07.646488 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:08:07.646492 | orchestrator | 2026-04-08 01:08:07.646496 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-08 01:08:07.646500 | orchestrator | Wednesday 08 April 2026 01:05:22 +0000 (0:00:12.506) 0:06:22.559 ******* 2026-04-08 01:08:07.646505 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:08:07.646511 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:08:07.646516 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:08:07.646522 | orchestrator | 2026-04-08 01:08:07.646527 | orchestrator | RUNNING HANDLER 
[nova-cell : Restart nova-ssh container] *********************** 2026-04-08 01:08:07.646534 | orchestrator | Wednesday 08 April 2026 01:05:33 +0000 (0:00:11.605) 0:06:34.165 ******* 2026-04-08 01:08:07.646539 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.646545 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.646551 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.646557 | orchestrator | 2026-04-08 01:08:07.646566 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-08 01:08:07.646572 | orchestrator | Wednesday 08 April 2026 01:05:55 +0000 (0:00:21.432) 0:06:55.597 ******* 2026-04-08 01:08:07.646578 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.646583 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.646588 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.646594 | orchestrator | 2026-04-08 01:08:07.646600 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-08 01:08:07.646623 | orchestrator | Wednesday 08 April 2026 01:06:20 +0000 (0:00:25.170) 0:07:20.768 ******* 2026-04-08 01:08:07.646629 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.646635 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-04-08 01:08:07.646665 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
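The "Checking libvirt container is ready" handler above is retried (the log shows "10 retries left" for testbed-node-3 and testbed-node-5 before both succeed). As a minimal Python sketch of that bounded-retry readiness pattern — not the actual kolla-ansible implementation, and `check_cmd` here is any hypothetical probe command:

```python
import subprocess
import time

def wait_until_ready(check_cmd, retries=10, delay=5):
    """Poll a readiness command until it exits 0 or retries run out,
    mirroring the retries/until behaviour seen in the log above."""
    for attempt in range(1, retries + 1):
        if subprocess.run(check_cmd, capture_output=True).returncode == 0:
            return attempt  # how many attempts were needed
        time.sleep(delay)
    raise TimeoutError(f"not ready after {retries} attempts")
```

A task using this pattern reports FAILED - RETRYING on each unsuccessful attempt and only fails the play once the retry budget is exhausted.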
2026-04-08 01:08:07.646671 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.646677 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.646683 | orchestrator | 2026-04-08 01:08:07.646689 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-08 01:08:07.646695 | orchestrator | Wednesday 08 April 2026 01:06:26 +0000 (0:00:06.101) 0:07:26.869 ******* 2026-04-08 01:08:07.646701 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.646707 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.646714 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.646720 | orchestrator | 2026-04-08 01:08:07.646726 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-08 01:08:07.646733 | orchestrator | Wednesday 08 April 2026 01:06:27 +0000 (0:00:00.662) 0:07:27.532 ******* 2026-04-08 01:08:07.646744 | orchestrator | changed: [testbed-node-5] 2026-04-08 01:08:07.646748 | orchestrator | changed: [testbed-node-4] 2026-04-08 01:08:07.646752 | orchestrator | changed: [testbed-node-3] 2026-04-08 01:08:07.646755 | orchestrator | 2026-04-08 01:08:07.646759 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-08 01:08:07.646763 | orchestrator | Wednesday 08 April 2026 01:06:52 +0000 (0:00:25.242) 0:07:52.775 ******* 2026-04-08 01:08:07.646767 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.646771 | orchestrator | 2026-04-08 01:08:07.646775 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-08 01:08:07.646778 | orchestrator | Wednesday 08 April 2026 01:06:52 +0000 (0:00:00.308) 0:07:53.083 ******* 2026-04-08 01:08:07.646782 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.646786 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.646790 | orchestrator | skipping: [testbed-node-0] 
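The "Waiting for nova-compute services to register themselves" task that follows polls the cell database until every expected compute host shows up (the log grants it 20 retries, with one FAILED - RETRYING before success). A minimal sketch of that wait loop, assuming a caller-supplied `list_services` callable in place of the real OpenStack API query:

```python
import time

def wait_for_services(list_services, expected_hosts, retries=20, delay=10):
    """Poll until every expected compute host has registered itself.
    `list_services` is any callable returning the host names currently
    registered; this is an illustration, not the kolla-ansible code."""
    for _ in range(retries):
        registered = set(list_services())
        if set(expected_hosts) <= registered:
            return registered
        time.sleep(delay)
    raise TimeoutError("compute services failed to register")
```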
2026-04-08 01:08:07.646794 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.646797 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.646801 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-04-08 01:08:07.646806 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-08 01:08:07.646810 | orchestrator | 2026-04-08 01:08:07.646814 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-08 01:08:07.646817 | orchestrator | Wednesday 08 April 2026 01:07:14 +0000 (0:00:21.831) 0:08:14.914 ******* 2026-04-08 01:08:07.646821 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.646825 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.646829 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.646833 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.646839 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.646845 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.646850 | orchestrator | 2026-04-08 01:08:07.646856 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-08 01:08:07.646866 | orchestrator | Wednesday 08 April 2026 01:07:22 +0000 (0:00:08.228) 0:08:23.143 ******* 2026-04-08 01:08:07.646874 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.646879 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.646885 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.646891 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.646896 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.646902 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-04-08 01:08:07.646907 | orchestrator | 2026-04-08 01:08:07.646913 | orchestrator | TASK [nova-cell : 
Get a list of existing cells] ******************************** 2026-04-08 01:08:07.646919 | orchestrator | Wednesday 08 April 2026 01:07:26 +0000 (0:00:03.199) 0:08:26.342 ******* 2026-04-08 01:08:07.646925 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-08 01:08:07.646930 | orchestrator | 2026-04-08 01:08:07.646937 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-08 01:08:07.646942 | orchestrator | Wednesday 08 April 2026 01:07:41 +0000 (0:00:15.340) 0:08:41.682 ******* 2026-04-08 01:08:07.646948 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-08 01:08:07.646953 | orchestrator | 2026-04-08 01:08:07.646959 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-08 01:08:07.646965 | orchestrator | Wednesday 08 April 2026 01:07:42 +0000 (0:00:01.263) 0:08:42.945 ******* 2026-04-08 01:08:07.646971 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.646977 | orchestrator | 2026-04-08 01:08:07.646983 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-08 01:08:07.646990 | orchestrator | Wednesday 08 April 2026 01:07:43 +0000 (0:00:01.293) 0:08:44.239 ******* 2026-04-08 01:08:07.647002 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-08 01:08:07.647008 | orchestrator | 2026-04-08 01:08:07.647014 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-08 01:08:07.647021 | orchestrator | Wednesday 08 April 2026 01:07:57 +0000 (0:00:13.872) 0:08:58.112 ******* 2026-04-08 01:08:07.647026 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:08:07.647030 | orchestrator | ok: [testbed-node-4] 2026-04-08 01:08:07.647034 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:08:07.647038 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:08:07.647042 | orchestrator | ok: 
[testbed-node-1] 2026-04-08 01:08:07.647045 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:08:07.647049 | orchestrator | 2026-04-08 01:08:07.647060 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-08 01:08:07.647064 | orchestrator | 2026-04-08 01:08:07.647067 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-08 01:08:07.647071 | orchestrator | Wednesday 08 April 2026 01:07:59 +0000 (0:00:01.849) 0:08:59.961 ******* 2026-04-08 01:08:07.647075 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:08:07.647079 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:08:07.647086 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:08:07.647091 | orchestrator | 2026-04-08 01:08:07.647097 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-08 01:08:07.647103 | orchestrator | 2026-04-08 01:08:07.647108 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-08 01:08:07.647118 | orchestrator | Wednesday 08 April 2026 01:08:00 +0000 (0:00:01.159) 0:09:01.121 ******* 2026-04-08 01:08:07.647124 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.647129 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.647135 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.647141 | orchestrator | 2026-04-08 01:08:07.647147 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-08 01:08:07.647153 | orchestrator | 2026-04-08 01:08:07.647159 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-08 01:08:07.647165 | orchestrator | Wednesday 08 April 2026 01:08:01 +0000 (0:00:00.530) 0:09:01.651 ******* 2026-04-08 01:08:07.647171 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-08 01:08:07.647177 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-08 01:08:07.647184 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-08 01:08:07.647188 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-08 01:08:07.647192 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-08 01:08:07.647196 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-08 01:08:07.647200 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:08:07.647204 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-08 01:08:07.647207 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-08 01:08:07.647211 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-08 01:08:07.647215 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-08 01:08:07.647219 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-08 01:08:07.647222 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-08 01:08:07.647226 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-08 01:08:07.647230 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-08 01:08:07.647234 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-08 01:08:07.647237 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-08 01:08:07.647241 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-08 01:08:07.647245 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-08 01:08:07.647249 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:08:07.647257 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-08 01:08:07.647261 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  
2026-04-08 01:08:07.647265 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-08 01:08:07.647269 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-08 01:08:07.647272 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-08 01:08:07.647276 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:08:07.647280 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-08 01:08:07.647284 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-08 01:08:07.647287 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-08 01:08:07.647291 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-08 01:08:07.647295 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-08 01:08:07.647299 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-08 01:08:07.647302 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-08 01:08:07.647306 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.647310 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.647313 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-08 01:08:07.647317 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-08 01:08:07.647321 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-08 01:08:07.647325 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-08 01:08:07.647329 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-08 01:08:07.647332 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-08 01:08:07.647336 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.647340 | orchestrator | 2026-04-08 01:08:07.647344 | orchestrator | PLAY [Reload global Nova API 
services] ***************************************** 2026-04-08 01:08:07.647347 | orchestrator | 2026-04-08 01:08:07.647351 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-08 01:08:07.647355 | orchestrator | Wednesday 08 April 2026 01:08:02 +0000 (0:00:01.252) 0:09:02.904 ******* 2026-04-08 01:08:07.647359 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-08 01:08:07.647363 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-08 01:08:07.647366 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.647370 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-08 01:08:07.647374 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-08 01:08:07.647382 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.647385 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-08 01:08:07.647389 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-08 01:08:07.647393 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.647397 | orchestrator | 2026-04-08 01:08:07.647401 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-08 01:08:07.647405 | orchestrator | 2026-04-08 01:08:07.647412 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-08 01:08:07.647416 | orchestrator | Wednesday 08 April 2026 01:08:03 +0000 (0:00:00.691) 0:09:03.595 ******* 2026-04-08 01:08:07.647420 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.647424 | orchestrator | 2026-04-08 01:08:07.647427 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-08 01:08:07.647431 | orchestrator | 2026-04-08 01:08:07.647435 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-08 
01:08:07.647439 | orchestrator | Wednesday 08 April 2026 01:08:04 +0000 (0:00:00.671) 0:09:04.267 ******* 2026-04-08 01:08:07.647442 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:08:07.647446 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:08:07.647453 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:08:07.647457 | orchestrator | 2026-04-08 01:08:07.647461 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:08:07.647465 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 01:08:07.647470 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-08 01:08:07.647474 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-08 01:08:07.647478 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-08 01:08:07.647482 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-08 01:08:07.647486 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-08 01:08:07.647489 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-08 01:08:07.647493 | orchestrator | 2026-04-08 01:08:07.647497 | orchestrator | 2026-04-08 01:08:07.647501 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:08:07.647505 | orchestrator | Wednesday 08 April 2026 01:08:04 +0000 (0:00:00.572) 0:09:04.839 ******* 2026-04-08 01:08:07.647508 | orchestrator | =============================================================================== 2026-04-08 01:08:07.647512 | orchestrator | nova : Running Nova API bootstrap container 
---------------------------- 33.84s 2026-04-08 01:08:07.647516 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.24s 2026-04-08 01:08:07.647520 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 25.17s 2026-04-08 01:08:07.647524 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.54s 2026-04-08 01:08:07.647527 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.83s 2026-04-08 01:08:07.647531 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.43s 2026-04-08 01:08:07.647535 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.39s 2026-04-08 01:08:07.647539 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.14s 2026-04-08 01:08:07.647542 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.53s 2026-04-08 01:08:07.647546 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.44s 2026-04-08 01:08:07.647550 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.34s 2026-04-08 01:08:07.647554 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.93s 2026-04-08 01:08:07.647557 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.22s 2026-04-08 01:08:07.647561 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.87s 2026-04-08 01:08:07.647565 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.51s 2026-04-08 01:08:07.647569 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.61s 2026-04-08 01:08:07.647572 | orchestrator | nova : Copying over nova.conf 
------------------------------------------- 9.54s 2026-04-08 01:08:07.647576 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.92s 2026-04-08 01:08:07.647580 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.91s 2026-04-08 01:08:07.647584 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.58s 2026-04-08 01:08:07.647591 | orchestrator | 2026-04-08 01:08:07 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:08:07.647595 | orchestrator | 2026-04-08 01:08:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:08:10.681630 | orchestrator | 2026-04-08 01:08:10 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:08:10.681841 | orchestrator | 2026-04-08 01:08:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:08:13.717864 | orchestrator | 2026-04-08 01:08:13 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:08:13.717957 | orchestrator | 2026-04-08 01:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:08:16.762109 | orchestrator | 2026-04-08 01:08:16 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:08:16.762216 | orchestrator | 2026-04-08 01:08:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:08:19.802503 | orchestrator | 2026-04-08 01:08:19 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:08:19.802568 | orchestrator | 2026-04-08 01:08:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:08:22.841757 | orchestrator | 2026-04-08 01:08:22 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:08:22.841831 | orchestrator | 2026-04-08 01:08:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:08:25.887997 | orchestrator | 2026-04-08 01:08:25 | INFO  | Task 
3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:03.343438 | orchestrator | 2026-04-08 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:06.376240 | orchestrator | 2026-04-08 01:10:06 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:06.376325 | orchestrator | 2026-04-08 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:09.424546 | orchestrator | 2026-04-08 01:10:09 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:09.424614 | orchestrator | 2026-04-08 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:12.467474 | orchestrator | 2026-04-08 01:10:12 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:12.467527 | orchestrator | 2026-04-08 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:15.508044 | orchestrator | 2026-04-08 01:10:15 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:15.508135 | orchestrator | 2026-04-08 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:18.540104 | orchestrator | 2026-04-08 01:10:18 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:18.540165 | orchestrator | 2026-04-08 01:10:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:21.582731 | orchestrator | 2026-04-08 01:10:21 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:21.582838 | orchestrator | 2026-04-08 01:10:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:24.632536 | orchestrator | 2026-04-08 01:10:24 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:24.632598 | orchestrator | 2026-04-08 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:27.708930 | orchestrator | 2026-04-08 01:10:27 | INFO  | Task 
3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:27.709010 | orchestrator | 2026-04-08 01:10:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:30.751380 | orchestrator | 2026-04-08 01:10:30 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:30.751437 | orchestrator | 2026-04-08 01:10:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:33.793454 | orchestrator | 2026-04-08 01:10:33 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:33.794051 | orchestrator | 2026-04-08 01:10:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:36.840940 | orchestrator | 2026-04-08 01:10:36 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:36.841033 | orchestrator | 2026-04-08 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:39.883328 | orchestrator | 2026-04-08 01:10:39 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state STARTED 2026-04-08 01:10:39.883377 | orchestrator | 2026-04-08 01:10:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:42.934604 | orchestrator | 2026-04-08 01:10:42 | INFO  | Task 3c45f1fe-d9da-460a-b981-4800177e066f is in state SUCCESS 2026-04-08 01:10:42.935704 | orchestrator | 2026-04-08 01:10:42.935747 | orchestrator | 2026-04-08 01:10:42.935786 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 01:10:42.935793 | orchestrator | 2026-04-08 01:10:42.935797 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 01:10:42.935801 | orchestrator | Wednesday 08 April 2026 01:05:53 +0000 (0:00:00.319) 0:00:00.320 ******* 2026-04-08 01:10:42.935806 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.935811 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:10:42.935815 | orchestrator | ok: [testbed-node-2] 
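The STARTED/SUCCESS messages above come from a simple check-then-sleep polling loop around an asynchronous task. A minimal sketch of that pattern (plain Python, not the actual osism client code; `fetch_state` is a hypothetical stand-in for however the task backend is queried):

```python
import time


def wait_for_task(task_id, fetch_state, interval=1.0, timeout=7200.0):
    """Poll a task's state until it leaves STARTED/PENDING, as in the log above.

    fetch_state(task_id) -> str is a caller-supplied lookup (hypothetical here).
    Returns the terminal state, e.g. "SUCCESS" or "FAILURE".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_state(task_id)
        print(f"INFO  | Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state  # terminal state reached
        print(f"INFO  | Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still running after {timeout}s")
```

Note that the log shows roughly three seconds between checks despite the one-second wait message; the extra time is spent in the state lookup itself, which a real client would account for when choosing the poll interval.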
2026-04-08 01:10:42.935819 | orchestrator | 2026-04-08 01:10:42.935823 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 01:10:42.935827 | orchestrator | Wednesday 08 April 2026 01:05:53 +0000 (0:00:00.285) 0:00:00.605 ******* 2026-04-08 01:10:42.935831 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-08 01:10:42.935836 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-08 01:10:42.935840 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-08 01:10:42.935844 | orchestrator | 2026-04-08 01:10:42.935848 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-08 01:10:42.935851 | orchestrator | 2026-04-08 01:10:42.935855 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-08 01:10:42.935859 | orchestrator | Wednesday 08 April 2026 01:05:54 +0000 (0:00:00.336) 0:00:00.942 ******* 2026-04-08 01:10:42.935863 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:10:42.935867 | orchestrator | 2026-04-08 01:10:42.935871 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-04-08 01:10:42.935875 | orchestrator | Wednesday 08 April 2026 01:05:54 +0000 (0:00:00.766) 0:00:01.708 ******* 2026-04-08 01:10:42.935880 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-08 01:10:42.935883 | orchestrator | 2026-04-08 01:10:42.935888 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-08 01:10:42.935894 | orchestrator | Wednesday 08 April 2026 01:05:58 +0000 (0:00:03.504) 0:00:05.213 ******* 2026-04-08 01:10:42.935924 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-08 
01:10:42.935931 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-08 01:10:42.935936 | orchestrator | 2026-04-08 01:10:42.935953 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-08 01:10:42.935960 | orchestrator | Wednesday 08 April 2026 01:06:05 +0000 (0:00:07.298) 0:00:12.512 ******* 2026-04-08 01:10:42.935967 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-08 01:10:42.935972 | orchestrator | 2026-04-08 01:10:42.935978 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-08 01:10:42.935984 | orchestrator | Wednesday 08 April 2026 01:06:08 +0000 (0:00:02.798) 0:00:15.310 ******* 2026-04-08 01:10:42.935990 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-08 01:10:42.935997 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-08 01:10:42.936003 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-08 01:10:42.936009 | orchestrator | 2026-04-08 01:10:42.936016 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-08 01:10:42.936020 | orchestrator | Wednesday 08 April 2026 01:06:16 +0000 (0:00:07.438) 0:00:22.749 ******* 2026-04-08 01:10:42.936024 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-08 01:10:42.936028 | orchestrator | 2026-04-08 01:10:42.936031 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-08 01:10:42.936035 | orchestrator | Wednesday 08 April 2026 01:06:19 +0000 (0:00:03.167) 0:00:25.916 ******* 2026-04-08 01:10:42.936039 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-08 01:10:42.936043 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-08 01:10:42.936047 | orchestrator | 2026-04-08 
01:10:42.936050 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-08 01:10:42.936054 | orchestrator | Wednesday 08 April 2026 01:06:25 +0000 (0:00:06.680) 0:00:32.597 ******* 2026-04-08 01:10:42.936058 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-08 01:10:42.936062 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-08 01:10:42.936150 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-08 01:10:42.936155 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-08 01:10:42.936158 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-08 01:10:42.936162 | orchestrator | 2026-04-08 01:10:42.936166 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-08 01:10:42.936170 | orchestrator | Wednesday 08 April 2026 01:06:40 +0000 (0:00:14.694) 0:00:47.291 ******* 2026-04-08 01:10:42.936174 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:10:42.936178 | orchestrator | 2026-04-08 01:10:42.936195 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-04-08 01:10:42.936200 | orchestrator | Wednesday 08 April 2026 01:06:41 +0000 (0:00:00.695) 0:00:47.987 ******* 2026-04-08 01:10:42.936204 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.936209 | orchestrator | 2026-04-08 01:10:42.936212 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-08 01:10:42.936216 | orchestrator | Wednesday 08 April 2026 01:06:46 +0000 (0:00:04.842) 0:00:52.829 ******* 2026-04-08 01:10:42.936254 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.936285 | orchestrator | 2026-04-08 01:10:42.936291 | orchestrator | TASK [octavia : 
Get service project id] **************************************** 2026-04-08 01:10:42.936540 | orchestrator | Wednesday 08 April 2026 01:06:49 +0000 (0:00:03.777) 0:00:56.607 ******* 2026-04-08 01:10:42.936555 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.936561 | orchestrator | 2026-04-08 01:10:42.936568 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-08 01:10:42.936586 | orchestrator | Wednesday 08 April 2026 01:06:52 +0000 (0:00:02.677) 0:00:59.284 ******* 2026-04-08 01:10:42.936593 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-08 01:10:42.936599 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-08 01:10:42.936604 | orchestrator | 2026-04-08 01:10:42.936628 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-08 01:10:42.936635 | orchestrator | Wednesday 08 April 2026 01:07:02 +0000 (0:00:09.706) 0:01:08.990 ******* 2026-04-08 01:10:42.936649 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-08 01:10:42.936656 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-08 01:10:42.936665 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-08 01:10:42.936672 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-08 01:10:42.936678 | orchestrator | 2026-04-08 01:10:42.936684 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-08 01:10:42.936690 | orchestrator | Wednesday 08 April 2026 01:07:19 +0000 (0:00:17.388) 
0:01:26.380 ******* 2026-04-08 01:10:42.936696 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.936702 | orchestrator | 2026-04-08 01:10:42.936708 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-08 01:10:42.936851 | orchestrator | Wednesday 08 April 2026 01:07:23 +0000 (0:00:04.246) 0:01:30.626 ******* 2026-04-08 01:10:42.936866 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.936872 | orchestrator | 2026-04-08 01:10:42.936879 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-08 01:10:42.936884 | orchestrator | Wednesday 08 April 2026 01:07:29 +0000 (0:00:05.617) 0:01:36.244 ******* 2026-04-08 01:10:42.936891 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:10:42.936897 | orchestrator | 2026-04-08 01:10:42.936920 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-08 01:10:42.936927 | orchestrator | Wednesday 08 April 2026 01:07:29 +0000 (0:00:00.187) 0:01:36.432 ******* 2026-04-08 01:10:42.936932 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937160 | orchestrator | 2026-04-08 01:10:42.937174 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-08 01:10:42.937180 | orchestrator | Wednesday 08 April 2026 01:07:34 +0000 (0:00:04.954) 0:01:41.386 ******* 2026-04-08 01:10:42.937186 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:10:42.937191 | orchestrator | 2026-04-08 01:10:42.937197 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-08 01:10:42.937203 | orchestrator | Wednesday 08 April 2026 01:07:35 +0000 (0:00:00.800) 0:01:42.187 ******* 2026-04-08 01:10:42.937209 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.937215 | orchestrator | changed: 
[testbed-node-2] 2026-04-08 01:10:42.937221 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.937226 | orchestrator | 2026-04-08 01:10:42.937232 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-08 01:10:42.937238 | orchestrator | Wednesday 08 April 2026 01:07:42 +0000 (0:00:06.611) 0:01:48.798 ******* 2026-04-08 01:10:42.937244 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.937249 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.937255 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.937261 | orchestrator | 2026-04-08 01:10:42.937266 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-08 01:10:42.937271 | orchestrator | Wednesday 08 April 2026 01:07:47 +0000 (0:00:05.412) 0:01:54.211 ******* 2026-04-08 01:10:42.937290 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.937295 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.937301 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.937306 | orchestrator | 2026-04-08 01:10:42.937312 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-08 01:10:42.937318 | orchestrator | Wednesday 08 April 2026 01:07:48 +0000 (0:00:00.799) 0:01:55.010 ******* 2026-04-08 01:10:42.937323 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:10:42.937328 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937334 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:10:42.937339 | orchestrator | 2026-04-08 01:10:42.937345 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-08 01:10:42.937351 | orchestrator | Wednesday 08 April 2026 01:07:50 +0000 (0:00:02.204) 0:01:57.214 ******* 2026-04-08 01:10:42.937356 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.937362 | orchestrator | changed: [testbed-node-0] 2026-04-08 
01:10:42.937367 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.937373 | orchestrator | 2026-04-08 01:10:42.937379 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-08 01:10:42.937385 | orchestrator | Wednesday 08 April 2026 01:07:51 +0000 (0:00:01.376) 0:01:58.591 ******* 2026-04-08 01:10:42.937391 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.937397 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.937403 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.937408 | orchestrator | 2026-04-08 01:10:42.937414 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-08 01:10:42.937420 | orchestrator | Wednesday 08 April 2026 01:07:53 +0000 (0:00:01.202) 0:01:59.794 ******* 2026-04-08 01:10:42.937426 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.937431 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.937436 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.937442 | orchestrator | 2026-04-08 01:10:42.937487 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-08 01:10:42.937496 | orchestrator | Wednesday 08 April 2026 01:07:55 +0000 (0:00:02.470) 0:02:02.264 ******* 2026-04-08 01:10:42.937525 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.937531 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.937537 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.937542 | orchestrator | 2026-04-08 01:10:42.937549 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-08 01:10:42.937555 | orchestrator | Wednesday 08 April 2026 01:07:57 +0000 (0:00:01.742) 0:02:04.007 ******* 2026-04-08 01:10:42.937560 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937566 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:10:42.937572 | 
orchestrator | ok: [testbed-node-2] 2026-04-08 01:10:42.937578 | orchestrator | 2026-04-08 01:10:42.937584 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-08 01:10:42.937589 | orchestrator | Wednesday 08 April 2026 01:07:57 +0000 (0:00:00.662) 0:02:04.669 ******* 2026-04-08 01:10:42.937595 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937600 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:10:42.937606 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:10:42.937611 | orchestrator | 2026-04-08 01:10:42.937618 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-08 01:10:42.937624 | orchestrator | Wednesday 08 April 2026 01:08:00 +0000 (0:00:02.563) 0:02:07.233 ******* 2026-04-08 01:10:42.937630 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:10:42.937636 | orchestrator | 2026-04-08 01:10:42.937643 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-08 01:10:42.937648 | orchestrator | Wednesday 08 April 2026 01:08:01 +0000 (0:00:00.681) 0:02:07.915 ******* 2026-04-08 01:10:42.937654 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937659 | orchestrator | 2026-04-08 01:10:42.937665 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-08 01:10:42.937681 | orchestrator | Wednesday 08 April 2026 01:08:05 +0000 (0:00:04.326) 0:02:12.241 ******* 2026-04-08 01:10:42.937687 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937693 | orchestrator | 2026-04-08 01:10:42.937699 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-08 01:10:42.937705 | orchestrator | Wednesday 08 April 2026 01:08:08 +0000 (0:00:03.477) 0:02:15.719 ******* 2026-04-08 01:10:42.937711 | orchestrator | ok: 
[testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-08 01:10:42.937723 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-08 01:10:42.937729 | orchestrator | 2026-04-08 01:10:42.937735 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-08 01:10:42.937742 | orchestrator | Wednesday 08 April 2026 01:08:17 +0000 (0:00:08.672) 0:02:24.392 ******* 2026-04-08 01:10:42.937748 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937781 | orchestrator | 2026-04-08 01:10:42.937789 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-08 01:10:42.937794 | orchestrator | Wednesday 08 April 2026 01:08:20 +0000 (0:00:03.339) 0:02:27.732 ******* 2026-04-08 01:10:42.937800 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:10:42.937806 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:10:42.937813 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:10:42.937820 | orchestrator | 2026-04-08 01:10:42.937827 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-08 01:10:42.937834 | orchestrator | Wednesday 08 April 2026 01:08:21 +0000 (0:00:00.298) 0:02:28.030 ******* 2026-04-08 01:10:42.937846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.937898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.937907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.937922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.937935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.937941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.937949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.937955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.937984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.937993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938099 | orchestrator | 2026-04-08 01:10:42.938105 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-08 01:10:42.938112 | orchestrator | Wednesday 08 April 2026 01:08:24 +0000 (0:00:02.821) 0:02:30.852 ******* 2026-04-08 01:10:42.938119 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:10:42.938125 | orchestrator | 2026-04-08 01:10:42.938154 | 
orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-08 01:10:42.938170 | orchestrator | Wednesday 08 April 2026 01:08:24 +0000 (0:00:00.133) 0:02:30.985 ******* 2026-04-08 01:10:42.938176 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:10:42.938182 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:10:42.938188 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:10:42.938194 | orchestrator | 2026-04-08 01:10:42.938200 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-08 01:10:42.938206 | orchestrator | Wednesday 08 April 2026 01:08:24 +0000 (0:00:00.283) 0:02:31.269 ******* 2026-04-08 01:10:42.938214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938250 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:10:42.938271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938302 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:10:42.938306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938336 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938347 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:10:42.938351 | orchestrator | 2026-04-08 01:10:42.938354 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-08 01:10:42.938358 | orchestrator | Wednesday 08 April 2026 01:08:25 +0000 (0:00:00.615) 0:02:31.884 ******* 2026-04-08 01:10:42.938362 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:10:42.938366 | orchestrator | 2026-04-08 01:10:42.938370 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-08 01:10:42.938374 | orchestrator | Wednesday 08 April 2026 01:08:25 +0000 
(0:00:00.580) 0:02:32.464 ******* 2026-04-08 01:10:42.938377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938408 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-04-08 01:10:42.938423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938484 | orchestrator | 2026-04-08 01:10:42.938488 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-08 01:10:42.938492 | orchestrator | Wednesday 08 April 2026 01:08:30 +0000 (0:00:05.272) 0:02:37.737 ******* 2026-04-08 01:10:42.938496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938523 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:10:42.938531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938557 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:10:42.938561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938587 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:10:42.938591 | orchestrator | 2026-04-08 01:10:42.938594 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-08 01:10:42.938598 | orchestrator | Wednesday 08 April 2026 01:08:31 +0000 (0:00:00.642) 0:02:38.379 ******* 2026-04-08 01:10:42.938602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938628 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:10:42.938634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-04-08 01:10:42.938642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 01:10:42.938662 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:10:42.938666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 01:10:42.938672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 01:10:42.938680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 01:10:42.938688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 
01:10:42.938692 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:10:42.938696 | orchestrator | 2026-04-08 01:10:42.938699 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-08 01:10:42.938703 | orchestrator | Wednesday 08 April 2026 01:08:32 +0000 (0:00:01.024) 0:02:39.404 ******* 2026-04-08 01:10:42.938711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 
01:10:42.938821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938831 | orchestrator | 2026-04-08 01:10:42.938835 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-08 01:10:42.938839 | orchestrator | Wednesday 08 April 2026 01:08:38 +0000 (0:00:05.368) 0:02:44.772 ******* 2026-04-08 01:10:42.938843 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-08 01:10:42.938848 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-08 01:10:42.938853 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-08 01:10:42.938856 | orchestrator | 2026-04-08 01:10:42.938860 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-08 01:10:42.938864 | orchestrator | Wednesday 08 April 2026 01:08:39 +0000 (0:00:01.675) 0:02:46.448 ******* 2026-04-08 01:10:42.938869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.938907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.938926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.938969 | orchestrator | 2026-04-08 01:10:42.938972 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2026-04-08 01:10:42.938976 | orchestrator | Wednesday 08 April 2026 01:08:56 +0000 (0:00:16.525) 0:03:02.973 ******* 2026-04-08 01:10:42.938980 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.938984 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.938988 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.938992 | orchestrator | 2026-04-08 01:10:42.938995 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-08 01:10:42.938999 | orchestrator | Wednesday 08 April 2026 01:08:58 +0000 (0:00:01.963) 0:03:04.936 ******* 2026-04-08 01:10:42.939003 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939007 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939013 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939020 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939024 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939027 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939031 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939035 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939039 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939043 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939046 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939050 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939054 | orchestrator | 2026-04-08 01:10:42.939058 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2026-04-08 01:10:42.939061 | orchestrator | Wednesday 08 April 2026 01:09:03 +0000 (0:00:04.980) 0:03:09.917 ******* 2026-04-08 01:10:42.939065 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939069 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939073 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939076 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939080 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939084 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939087 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939091 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939095 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939098 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939102 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939106 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939110 | orchestrator | 2026-04-08 01:10:42.939113 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-08 01:10:42.939120 | orchestrator | Wednesday 08 April 2026 01:09:08 +0000 (0:00:05.105) 0:03:15.023 ******* 2026-04-08 01:10:42.939124 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939128 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939132 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-08 01:10:42.939135 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939139 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939143 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-08 01:10:42.939147 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939150 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939154 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-08 01:10:42.939158 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939161 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939165 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-08 01:10:42.939169 | orchestrator | 2026-04-08 01:10:42.939173 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-08 01:10:42.939176 | orchestrator | Wednesday 08 April 2026 01:09:13 +0000 (0:00:05.478) 0:03:20.501 ******* 2026-04-08 01:10:42.939180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.939193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.939197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 01:10:42.939203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.939208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.939211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-08 01:10:42.939219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-08 01:10:42.939267 | orchestrator | 2026-04-08 01:10:42.939271 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-08 01:10:42.939275 | orchestrator | Wednesday 08 April 2026 01:09:17 +0000 (0:00:03.753) 0:03:24.254 ******* 2026-04-08 01:10:42.939279 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:10:42.939283 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:10:42.939286 | orchestrator | skipping: [testbed-node-2] 
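The "Check octavia containers" loop above iterates over service definitions that all share one shape: a container name, group, image, volume list, dimensions, and (for most services) a healthcheck dict. A minimal sketch of a validator for that shape, with the keys and the sample definition taken directly from the log output above (the helper itself is hypothetical, not part of the playbook):

```python
# Hypothetical validator for the kolla-style service definitions seen in the
# "Check octavia containers" loop. Key names are copied from the log output.
REQUIRED_KEYS = {"container_name", "group", "enabled", "image", "volumes", "dimensions"}
HEALTHCHECK_KEYS = {"interval", "retries", "start_period", "test", "timeout"}

def check_service(name, definition):
    """Return a list of problems with one service definition (empty if OK)."""
    problems = [f"{name}: missing key {k}" for k in REQUIRED_KEYS - definition.keys()]
    hc = definition.get("healthcheck")
    if hc is not None:
        problems += [f"{name}: healthcheck missing {k}" for k in HEALTHCHECK_KEYS - hc.keys()]
    return problems

# One of the definitions from the loop above, abbreviated:
octavia_worker = {
    "container_name": "octavia_worker",
    "group": "octavia-worker",
    "enabled": True,
    "image": "registry.osism.tech/kolla/octavia-worker:2024.2",
    "volumes": ["/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro"],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
        "timeout": "30",
    },
}

print(check_service("octavia-worker", octavia_worker))  # → []
```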
2026-04-08 01:10:42.939290 | orchestrator | 2026-04-08 01:10:42.939294 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-08 01:10:42.939298 | orchestrator | Wednesday 08 April 2026 01:09:17 +0000 (0:00:00.462) 0:03:24.716 ******* 2026-04-08 01:10:42.939301 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939305 | orchestrator | 2026-04-08 01:10:42.939309 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-08 01:10:42.939313 | orchestrator | Wednesday 08 April 2026 01:09:19 +0000 (0:00:02.016) 0:03:26.733 ******* 2026-04-08 01:10:42.939316 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939320 | orchestrator | 2026-04-08 01:10:42.939324 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-08 01:10:42.939327 | orchestrator | Wednesday 08 April 2026 01:09:22 +0000 (0:00:02.017) 0:03:28.751 ******* 2026-04-08 01:10:42.939331 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939335 | orchestrator | 2026-04-08 01:10:42.939339 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-08 01:10:42.939342 | orchestrator | Wednesday 08 April 2026 01:09:24 +0000 (0:00:02.094) 0:03:30.846 ******* 2026-04-08 01:10:42.939346 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939350 | orchestrator | 2026-04-08 01:10:42.939353 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-08 01:10:42.939357 | orchestrator | Wednesday 08 April 2026 01:09:26 +0000 (0:00:02.050) 0:03:32.896 ******* 2026-04-08 01:10:42.939361 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939365 | orchestrator | 2026-04-08 01:10:42.939368 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-08 01:10:42.939372 | orchestrator | 
Wednesday 08 April 2026 01:09:48 +0000 (0:00:22.326) 0:03:55.223 ******* 2026-04-08 01:10:42.939379 | orchestrator | 2026-04-08 01:10:42.939385 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-08 01:10:42.939389 | orchestrator | Wednesday 08 April 2026 01:09:48 +0000 (0:00:00.068) 0:03:55.291 ******* 2026-04-08 01:10:42.939393 | orchestrator | 2026-04-08 01:10:42.939397 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-08 01:10:42.939400 | orchestrator | Wednesday 08 April 2026 01:09:48 +0000 (0:00:00.064) 0:03:55.356 ******* 2026-04-08 01:10:42.939404 | orchestrator | 2026-04-08 01:10:42.939408 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-08 01:10:42.939412 | orchestrator | Wednesday 08 April 2026 01:09:48 +0000 (0:00:00.065) 0:03:55.421 ******* 2026-04-08 01:10:42.939415 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939419 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.939423 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.939427 | orchestrator | 2026-04-08 01:10:42.939430 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-08 01:10:42.939434 | orchestrator | Wednesday 08 April 2026 01:10:04 +0000 (0:00:15.585) 0:04:11.006 ******* 2026-04-08 01:10:42.939438 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939441 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.939445 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.939449 | orchestrator | 2026-04-08 01:10:42.939453 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-08 01:10:42.939456 | orchestrator | Wednesday 08 April 2026 01:10:16 +0000 (0:00:12.050) 0:04:23.056 ******* 2026-04-08 01:10:42.939460 | orchestrator | changed: [testbed-node-0] 
2026-04-08 01:10:42.939464 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.939468 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.939471 | orchestrator | 2026-04-08 01:10:42.939475 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-08 01:10:42.939479 | orchestrator | Wednesday 08 April 2026 01:10:26 +0000 (0:00:09.947) 0:04:33.004 ******* 2026-04-08 01:10:42.939483 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939486 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.939490 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.939494 | orchestrator | 2026-04-08 01:10:42.939497 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-08 01:10:42.939501 | orchestrator | Wednesday 08 April 2026 01:10:31 +0000 (0:00:05.427) 0:04:38.432 ******* 2026-04-08 01:10:42.939505 | orchestrator | changed: [testbed-node-1] 2026-04-08 01:10:42.939509 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:10:42.939512 | orchestrator | changed: [testbed-node-2] 2026-04-08 01:10:42.939516 | orchestrator | 2026-04-08 01:10:42.939520 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:10:42.939524 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 01:10:42.939528 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-08 01:10:42.939532 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-08 01:10:42.939535 | orchestrator | 2026-04-08 01:10:42.939539 | orchestrator | 2026-04-08 01:10:42.939543 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:10:42.939547 | orchestrator | Wednesday 08 April 2026 01:10:42 +0000 
(0:00:10.867) 0:04:49.300 ******* 2026-04-08 01:10:42.939553 | orchestrator | =============================================================================== 2026-04-08 01:10:42.939557 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.33s 2026-04-08 01:10:42.939560 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.39s 2026-04-08 01:10:42.939564 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.53s 2026-04-08 01:10:42.939571 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.59s 2026-04-08 01:10:42.939574 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.69s 2026-04-08 01:10:42.939578 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.05s 2026-04-08 01:10:42.939582 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.87s 2026-04-08 01:10:42.939586 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.95s 2026-04-08 01:10:42.939589 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.71s 2026-04-08 01:10:42.939593 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.67s 2026-04-08 01:10:42.939597 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.44s 2026-04-08 01:10:42.939601 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.30s 2026-04-08 01:10:42.939604 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.68s 2026-04-08 01:10:42.939608 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.61s 2026-04-08 01:10:42.939612 | orchestrator | octavia : Create loadbalancer management subnet 
------------------------- 5.62s 2026-04-08 01:10:42.939616 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.48s 2026-04-08 01:10:42.939619 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.43s 2026-04-08 01:10:42.939623 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.41s 2026-04-08 01:10:42.939627 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.37s 2026-04-08 01:10:42.939631 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.27s 2026-04-08 01:10:42.939637 | orchestrator | 2026-04-08 01:10:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:10:45.981529 | orchestrator | 2026-04-08 01:10:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:10:49.026069 | orchestrator | 2026-04-08 01:10:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:10:52.071783 | orchestrator | 2026-04-08 01:10:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:10:55.119471 | orchestrator | 2026-04-08 01:10:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:10:58.165470 | orchestrator | 2026-04-08 01:10:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:01.208292 | orchestrator | 2026-04-08 01:11:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:04.252022 | orchestrator | 2026-04-08 01:11:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:07.294961 | orchestrator | 2026-04-08 01:11:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:10.335907 | orchestrator | 2026-04-08 01:11:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:13.376233 | orchestrator | 2026-04-08 01:11:13 | INFO  | Wait 1 second(s) until refresh of running tasks 
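The "TASKS RECAP" block above lists the slowest tasks as `<task name> ----- <seconds>s` lines. A small, hypothetical parser (not part of the job) that turns those lines into a name-to-duration mapping, useful when comparing runtimes across periodic builds:

```python
import re

# Matches recap lines like:
# "octavia : Running Octavia bootstrap container ----------- 22.33s"
RECAP_LINE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Map task name -> duration in seconds from TASKS RECAP lines."""
    timings = {}
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            timings[m.group("task")] = float(m.group("secs"))
    return timings

recap = [
    "octavia : Running Octavia bootstrap container -------------------------- 22.33s",
    "octavia : Add rules for security groups -------------------------------- 17.39s",
]
print(parse_recap(recap))
```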
2026-04-08 01:11:16.426578 | orchestrator | 2026-04-08 01:11:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:19.471960 | orchestrator | 2026-04-08 01:11:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:22.521276 | orchestrator | 2026-04-08 01:11:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:25.568284 | orchestrator | 2026-04-08 01:11:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:28.614114 | orchestrator | 2026-04-08 01:11:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:31.662101 | orchestrator | 2026-04-08 01:11:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:34.708401 | orchestrator | 2026-04-08 01:11:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:37.755277 | orchestrator | 2026-04-08 01:11:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:40.799905 | orchestrator | 2026-04-08 01:11:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-08 01:11:43.846231 | orchestrator | 2026-04-08 01:11:44.051579 | orchestrator | 2026-04-08 01:11:44.057418 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Apr 8 01:11:44 UTC 2026 2026-04-08 01:11:44.057485 | orchestrator | 2026-04-08 01:11:44.431710 | orchestrator | ok: Runtime: 0:32:40.666136 2026-04-08 01:11:44.683520 | 2026-04-08 01:11:44.683679 | TASK [Bootstrap services] 2026-04-08 01:11:45.457692 | orchestrator | 2026-04-08 01:11:45.457881 | orchestrator | # BOOTSTRAP 2026-04-08 01:11:45.457896 | orchestrator | 2026-04-08 01:11:45.457905 | orchestrator | + set -e 2026-04-08 01:11:45.457914 | orchestrator | + echo 2026-04-08 01:11:45.457922 | orchestrator | + echo '# BOOTSTRAP' 2026-04-08 01:11:45.457933 | orchestrator | + echo 2026-04-08 01:11:45.457965 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-08 01:11:45.467428 | orchestrator 
| + set -e 2026-04-08 01:11:45.467510 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-08 01:11:50.510169 | orchestrator | 2026-04-08 01:11:50 | INFO  | It takes a moment until task 8da5fa1c-f493-4398-b228-3d801c52abc8 (flavor-manager) has been started and output is visible here. 2026-04-08 01:12:00.157013 | orchestrator | 2026-04-08 01:11:55 | INFO  | Flavor SCS-1L-1 created 2026-04-08 01:12:00.157106 | orchestrator | 2026-04-08 01:11:55 | INFO  | Flavor SCS-1L-1-5 created 2026-04-08 01:12:00.157115 | orchestrator | 2026-04-08 01:11:55 | INFO  | Flavor SCS-1V-2 created 2026-04-08 01:12:00.157120 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-1V-2-5 created 2026-04-08 01:12:00.157124 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-1V-4 created 2026-04-08 01:12:00.157128 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-1V-4-10 created 2026-04-08 01:12:00.157133 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-1V-8 created 2026-04-08 01:12:00.157137 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-1V-8-20 created 2026-04-08 01:12:00.157150 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-2V-4 created 2026-04-08 01:12:00.157154 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-2V-4-10 created 2026-04-08 01:12:00.157158 | orchestrator | 2026-04-08 01:11:56 | INFO  | Flavor SCS-2V-8 created 2026-04-08 01:12:00.157162 | orchestrator | 2026-04-08 01:11:57 | INFO  | Flavor SCS-2V-8-20 created 2026-04-08 01:12:00.157166 | orchestrator | 2026-04-08 01:11:57 | INFO  | Flavor SCS-2V-16 created 2026-04-08 01:12:00.157170 | orchestrator | 2026-04-08 01:11:57 | INFO  | Flavor SCS-2V-16-50 created 2026-04-08 01:12:00.157174 | orchestrator | 2026-04-08 01:11:57 | INFO  | Flavor SCS-4V-8 created 2026-04-08 01:12:00.157178 | orchestrator | 2026-04-08 01:11:57 | INFO  | Flavor SCS-4V-8-20 created 2026-04-08 01:12:00.157182 | orchestrator | 2026-04-08 01:11:57 | INFO  | Flavor 
SCS-4V-16 created 2026-04-08 01:12:00.157185 | orchestrator | 2026-04-08 01:11:57 | INFO  | Flavor SCS-4V-16-50 created 2026-04-08 01:12:00.157190 | orchestrator | 2026-04-08 01:11:58 | INFO  | Flavor SCS-4V-32 created 2026-04-08 01:12:00.157194 | orchestrator | 2026-04-08 01:11:58 | INFO  | Flavor SCS-4V-32-100 created 2026-04-08 01:12:00.157197 | orchestrator | 2026-04-08 01:11:58 | INFO  | Flavor SCS-8V-16 created 2026-04-08 01:12:00.157201 | orchestrator | 2026-04-08 01:11:58 | INFO  | Flavor SCS-8V-16-50 created 2026-04-08 01:12:00.157205 | orchestrator | 2026-04-08 01:11:58 | INFO  | Flavor SCS-8V-32 created 2026-04-08 01:12:00.157209 | orchestrator | 2026-04-08 01:11:58 | INFO  | Flavor SCS-8V-32-100 created 2026-04-08 01:12:00.157213 | orchestrator | 2026-04-08 01:11:58 | INFO  | Flavor SCS-16V-32 created 2026-04-08 01:12:00.157217 | orchestrator | 2026-04-08 01:11:59 | INFO  | Flavor SCS-16V-32-100 created 2026-04-08 01:12:00.157221 | orchestrator | 2026-04-08 01:11:59 | INFO  | Flavor SCS-2V-4-20s created 2026-04-08 01:12:00.157225 | orchestrator | 2026-04-08 01:11:59 | INFO  | Flavor SCS-4V-8-50s created 2026-04-08 01:12:00.157228 | orchestrator | 2026-04-08 01:11:59 | INFO  | Flavor SCS-4V-16-100s created 2026-04-08 01:12:00.157232 | orchestrator | 2026-04-08 01:11:59 | INFO  | Flavor SCS-8V-32-100s created 2026-04-08 01:12:01.711917 | orchestrator | 2026-04-08 01:12:01 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-08 01:12:11.757549 | orchestrator | 2026-04-08 01:12:11 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-08 01:12:11.848156 | orchestrator | 2026-04-08 01:12:11 | INFO  | Task 99e26900-0f4f-4d48-b5f9-e88967e433e5 (bootstrap-basic) was prepared for execution. 2026-04-08 01:12:11.848257 | orchestrator | 2026-04-08 01:12:11 | INFO  | It takes a moment until task 99e26900-0f4f-4d48-b5f9-e88967e433e5 (bootstrap-basic) has been started and output is visible here. 
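The flavor-manager run above creates flavors following the SCS naming scheme, where the name itself encodes the resources: for example `SCS-2V-4-10` reads as 2 vCPUs, 4 GiB RAM, 10 GB disk. A sketch of a decoder for the names visible in the log, under the assumption (based on the SCS flavor naming convention, not this log) that `V` marks a regular vCPU, `L` a lower-performance core, and a trailing `s` on the disk field an SSD:

```python
import re

# Hypothetical decoder for the SCS flavor names created above,
# e.g. SCS-2V-4-10, SCS-1L-1, SCS-8V-32-100s.
SCS_NAME = re.compile(r"^SCS-(\d+)([LV])-(\d+)(?:-(\d+)(s?))?$")

def parse_scs_flavor(name):
    m = SCS_NAME.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, cpu_type, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(cpus),
        "cpu_type": cpu_type,              # V = vCPU, L = low-performance core
        "ram_gb": int(ram),
        "disk_gb": int(disk) if disk else 0,  # flavors like SCS-1V-2 have no disk
        "ssd": ssd == "s",
    }

print(parse_scs_flavor("SCS-2V-4-20s"))
```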
2026-04-08 01:13:00.197165 | orchestrator |
2026-04-08 01:13:00.197262 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-04-08 01:13:00.197272 | orchestrator |
2026-04-08 01:13:00.197279 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-08 01:13:00.197286 | orchestrator | Wednesday 08 April 2026 01:12:15 +0000 (0:00:00.106) 0:00:00.106 *******
2026-04-08 01:13:00.197292 | orchestrator | ok: [localhost]
2026-04-08 01:13:00.197299 | orchestrator |
2026-04-08 01:13:00.197306 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-04-08 01:13:00.197312 | orchestrator | Wednesday 08 April 2026 01:12:17 +0000 (0:00:01.997) 0:00:02.104 *******
2026-04-08 01:13:00.197320 | orchestrator | ok: [localhost]
2026-04-08 01:13:00.197325 | orchestrator |
2026-04-08 01:13:00.197331 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-04-08 01:13:00.197337 | orchestrator | Wednesday 08 April 2026 01:12:27 +0000 (0:00:09.815) 0:00:11.920 *******
2026-04-08 01:13:00.197343 | orchestrator | changed: [localhost]
2026-04-08 01:13:00.197350 | orchestrator |
2026-04-08 01:13:00.197356 | orchestrator | TASK [Create public network] ***************************************************
2026-04-08 01:13:00.197362 | orchestrator | Wednesday 08 April 2026 01:12:35 +0000 (0:00:08.217) 0:00:20.137 *******
2026-04-08 01:13:00.197368 | orchestrator | changed: [localhost]
2026-04-08 01:13:00.197374 | orchestrator |
2026-04-08 01:13:00.197383 | orchestrator | TASK [Set public network to default] *******************************************
2026-04-08 01:13:00.197389 | orchestrator | Wednesday 08 April 2026 01:12:40 +0000 (0:00:05.274) 0:00:25.412 *******
2026-04-08 01:13:00.197395 | orchestrator | changed: [localhost]
2026-04-08 01:13:00.197401 | orchestrator |
2026-04-08 01:13:00.197407 | orchestrator | TASK [Create public subnet] ****************************************************
2026-04-08 01:13:00.197412 | orchestrator | Wednesday 08 April 2026 01:12:47 +0000 (0:00:07.357) 0:00:32.769 *******
2026-04-08 01:13:00.197418 | orchestrator | changed: [localhost]
2026-04-08 01:13:00.197424 | orchestrator |
2026-04-08 01:13:00.197430 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-08 01:13:00.197436 | orchestrator | Wednesday 08 April 2026 01:12:52 +0000 (0:00:04.545) 0:00:37.315 *******
2026-04-08 01:13:00.197441 | orchestrator | changed: [localhost]
2026-04-08 01:13:00.197447 | orchestrator |
2026-04-08 01:13:00.197453 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-08 01:13:00.197467 | orchestrator | Wednesday 08 April 2026 01:12:56 +0000 (0:00:03.839) 0:00:41.154 *******
2026-04-08 01:13:00.197473 | orchestrator | ok: [localhost]
2026-04-08 01:13:00.197479 | orchestrator |
2026-04-08 01:13:00.197485 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:13:00.197491 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 01:13:00.197498 | orchestrator |
2026-04-08 01:13:00.197504 | orchestrator |
2026-04-08 01:13:00.197510 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:13:00.197516 | orchestrator | Wednesday 08 April 2026 01:12:59 +0000 (0:00:03.649) 0:00:44.803 *******
2026-04-08 01:13:00.197521 | orchestrator | ===============================================================================
2026-04-08 01:13:00.197527 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.82s
2026-04-08 01:13:00.197553 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.22s
2026-04-08 01:13:00.197559 | orchestrator | Set public network to default ------------------------------------------- 7.36s
2026-04-08 01:13:00.197565 | orchestrator | Create public network --------------------------------------------------- 5.27s
2026-04-08 01:13:00.197582 | orchestrator | Create public subnet ---------------------------------------------------- 4.55s
2026-04-08 01:13:00.197588 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.84s
2026-04-08 01:13:00.197601 | orchestrator | Create manager role ----------------------------------------------------- 3.65s
2026-04-08 01:13:00.197607 | orchestrator | Gathering Facts --------------------------------------------------------- 2.00s
2026-04-08 01:13:02.346071 | orchestrator | 2026-04-08 01:13:02 | INFO  | It takes a moment until task a2ac9ee2-9ef0-41b6-b103-b614c1b005d2 (image-manager) has been started and output is visible here.
2026-04-08 01:13:43.487384 | orchestrator | 2026-04-08 01:13:04 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-08 01:13:43.487464 | orchestrator | 2026-04-08 01:13:05 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-08 01:13:43.487473 | orchestrator | 2026-04-08 01:13:05 | INFO  | Importing image Cirros 0.6.2
2026-04-08 01:13:43.487478 | orchestrator | 2026-04-08 01:13:05 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-08 01:13:43.487484 | orchestrator | 2026-04-08 01:13:07 | INFO  | Waiting for image to leave queued state...
2026-04-08 01:13:43.487490 | orchestrator | 2026-04-08 01:13:09 | INFO  | Waiting for import to complete...
2026-04-08 01:13:43.487495 | orchestrator | 2026-04-08 01:13:19 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-08 01:13:43.487500 | orchestrator | 2026-04-08 01:13:19 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-08 01:13:43.487505 | orchestrator | 2026-04-08 01:13:19 | INFO  | Setting internal_version = 0.6.2
2026-04-08 01:13:43.487509 | orchestrator | 2026-04-08 01:13:19 | INFO  | Setting image_original_user = cirros
2026-04-08 01:13:43.487514 | orchestrator | 2026-04-08 01:13:19 | INFO  | Adding tag os:cirros
2026-04-08 01:13:43.487518 | orchestrator | 2026-04-08 01:13:20 | INFO  | Setting property architecture: x86_64
2026-04-08 01:13:43.487523 | orchestrator | 2026-04-08 01:13:20 | INFO  | Setting property hw_disk_bus: scsi
2026-04-08 01:13:43.487527 | orchestrator | 2026-04-08 01:13:20 | INFO  | Setting property hw_rng_model: virtio
2026-04-08 01:13:43.487532 | orchestrator | 2026-04-08 01:13:20 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-08 01:13:43.487536 | orchestrator | 2026-04-08 01:13:20 | INFO  | Setting property hw_watchdog_action: reset
2026-04-08 01:13:43.487540 | orchestrator | 2026-04-08 01:13:21 | INFO  | Setting property hypervisor_type: qemu
2026-04-08 01:13:43.487551 | orchestrator | 2026-04-08 01:13:21 | INFO  | Setting property os_distro: cirros
2026-04-08 01:13:43.487556 | orchestrator | 2026-04-08 01:13:21 | INFO  | Setting property os_purpose: minimal
2026-04-08 01:13:43.487563 | orchestrator | 2026-04-08 01:13:21 | INFO  | Setting property replace_frequency: never
2026-04-08 01:13:43.487570 | orchestrator | 2026-04-08 01:13:21 | INFO  | Setting property uuid_validity: none
2026-04-08 01:13:43.487576 | orchestrator | 2026-04-08 01:13:22 | INFO  | Setting property provided_until: none
2026-04-08 01:13:43.487583 | orchestrator | 2026-04-08 01:13:22 | INFO  | Setting property image_description: Cirros
2026-04-08 01:13:43.487590 | orchestrator | 2026-04-08 01:13:22 | INFO  | Setting property image_name: Cirros
2026-04-08 01:13:43.487662 | orchestrator | 2026-04-08 01:13:22 | INFO  | Setting property internal_version: 0.6.2
2026-04-08 01:13:43.487670 | orchestrator | 2026-04-08 01:13:23 | INFO  | Setting property image_original_user: cirros
2026-04-08 01:13:43.487676 | orchestrator | 2026-04-08 01:13:23 | INFO  | Setting property os_version: 0.6.2
2026-04-08 01:13:43.487683 | orchestrator | 2026-04-08 01:13:23 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-08 01:13:43.487692 | orchestrator | 2026-04-08 01:13:24 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-08 01:13:43.487698 | orchestrator | 2026-04-08 01:13:24 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-08 01:13:43.487704 | orchestrator | 2026-04-08 01:13:24 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-08 01:13:43.487714 | orchestrator | 2026-04-08 01:13:24 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-08 01:13:43.487721 | orchestrator | 2026-04-08 01:13:24 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-08 01:13:43.487728 | orchestrator | 2026-04-08 01:13:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-08 01:13:43.487734 | orchestrator | 2026-04-08 01:13:24 | INFO  | Importing image Cirros 0.6.3
2026-04-08 01:13:43.487740 | orchestrator | 2026-04-08 01:13:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-08 01:13:43.487746 | orchestrator | 2026-04-08 01:13:25 | INFO  | Waiting for image to leave queued state...
2026-04-08 01:13:43.487753 | orchestrator | 2026-04-08 01:13:27 | INFO  | Waiting for import to complete...
2026-04-08 01:13:43.487775 | orchestrator | 2026-04-08 01:13:37 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-08 01:13:43.487781 | orchestrator | 2026-04-08 01:13:38 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-08 01:13:43.487787 | orchestrator | 2026-04-08 01:13:38 | INFO  | Setting internal_version = 0.6.3
2026-04-08 01:13:43.487793 | orchestrator | 2026-04-08 01:13:38 | INFO  | Setting image_original_user = cirros
2026-04-08 01:13:43.487799 | orchestrator | 2026-04-08 01:13:38 | INFO  | Adding tag os:cirros
2026-04-08 01:13:43.487804 | orchestrator | 2026-04-08 01:13:38 | INFO  | Setting property architecture: x86_64
2026-04-08 01:13:43.487810 | orchestrator | 2026-04-08 01:13:38 | INFO  | Setting property hw_disk_bus: scsi
2026-04-08 01:13:43.487816 | orchestrator | 2026-04-08 01:13:38 | INFO  | Setting property hw_rng_model: virtio
2026-04-08 01:13:43.487822 | orchestrator | 2026-04-08 01:13:38 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-08 01:13:43.487828 | orchestrator | 2026-04-08 01:13:39 | INFO  | Setting property hw_watchdog_action: reset
2026-04-08 01:13:43.487834 | orchestrator | 2026-04-08 01:13:39 | INFO  | Setting property hypervisor_type: qemu
2026-04-08 01:13:43.487839 | orchestrator | 2026-04-08 01:13:39 | INFO  | Setting property os_distro: cirros
2026-04-08 01:13:43.487845 | orchestrator | 2026-04-08 01:13:40 | INFO  | Setting property os_purpose: minimal
2026-04-08 01:13:43.487852 | orchestrator | 2026-04-08 01:13:40 | INFO  | Setting property replace_frequency: never
2026-04-08 01:13:43.487857 | orchestrator | 2026-04-08 01:13:40 | INFO  | Setting property uuid_validity: none
2026-04-08 01:13:43.487863 | orchestrator | 2026-04-08 01:13:40 | INFO  | Setting property provided_until: none
2026-04-08 01:13:43.487870 | orchestrator | 2026-04-08 01:13:40 | INFO  | Setting property image_description: Cirros
2026-04-08 01:13:43.487884 | orchestrator | 2026-04-08 01:13:41 | INFO  | Setting property image_name: Cirros
2026-04-08 01:13:43.487891 | orchestrator | 2026-04-08 01:13:41 | INFO  | Setting property internal_version: 0.6.3
2026-04-08 01:13:43.487897 | orchestrator | 2026-04-08 01:13:41 | INFO  | Setting property image_original_user: cirros
2026-04-08 01:13:43.487903 | orchestrator | 2026-04-08 01:13:41 | INFO  | Setting property os_version: 0.6.3
2026-04-08 01:13:43.487909 | orchestrator | 2026-04-08 01:13:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-08 01:13:43.487916 | orchestrator | 2026-04-08 01:13:42 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-08 01:13:43.487921 | orchestrator | 2026-04-08 01:13:42 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-08 01:13:43.487927 | orchestrator | 2026-04-08 01:13:42 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-08 01:13:43.487935 | orchestrator | 2026-04-08 01:13:42 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-08 01:13:43.754797 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-08 01:13:45.950336 | orchestrator | 2026-04-08 01:13:45 | INFO  | date: 2026-04-07
2026-04-08 01:13:45.950549 | orchestrator | 2026-04-08 01:13:45 | INFO  | image: octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-08 01:13:45.950722 | orchestrator | 2026-04-08 01:13:45 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-08 01:13:45.950741 | orchestrator | 2026-04-08 01:13:45 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2.CHECKSUM
2026-04-08 01:13:46.167504 | orchestrator | 2026-04-08 01:13:46 | INFO  | checksum: c4f8130b9b88752cd3a30f3b2f025c70b2421aeafd1894491d662bda8fc15d00
2026-04-08 01:13:46.272449 | orchestrator | 2026-04-08 01:13:46 | INFO  | It takes a moment until task a3ce96ab-d923-46c5-8855-d37c93ba86dc (image-manager) has been started and output is visible here.
2026-04-08 01:14:47.512851 | orchestrator | 2026-04-08 01:13:48 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-07'
2026-04-08 01:14:47.512942 | orchestrator | 2026-04-08 01:13:48 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2: 200
2026-04-08 01:14:47.512957 | orchestrator | 2026-04-08 01:13:48 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-07
2026-04-08 01:14:47.512964 | orchestrator | 2026-04-08 01:13:48 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-08 01:14:47.512972 | orchestrator | 2026-04-08 01:13:50 | INFO  | Waiting for import to complete...
2026-04-08 01:14:47.512979 | orchestrator | 2026-04-08 01:14:00 | INFO  | Waiting for import to complete...
2026-04-08 01:14:47.512985 | orchestrator | 2026-04-08 01:14:11 | INFO  | Waiting for import to complete...
2026-04-08 01:14:47.512992 | orchestrator | 2026-04-08 01:14:21 | INFO  | Waiting for import to complete...
2026-04-08 01:14:47.512999 | orchestrator | 2026-04-08 01:14:31 | INFO  | Waiting for import to complete...
2026-04-08 01:14:47.513008 | orchestrator | 2026-04-08 01:14:41 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-07' successfully completed, reloading images
2026-04-08 01:14:47.513016 | orchestrator | 2026-04-08 01:14:41 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-08 01:14:47.513061 | orchestrator | 2026-04-08 01:14:41 | INFO  | Setting internal_version = 2026-04-07
2026-04-08 01:14:47.513069 | orchestrator | 2026-04-08 01:14:41 | INFO  | Setting image_original_user = ubuntu
2026-04-08 01:14:47.513075 | orchestrator | 2026-04-08 01:14:41 | INFO  | Adding tag amphora
2026-04-08 01:14:47.513089 | orchestrator | 2026-04-08 01:14:42 | INFO  | Adding tag os:ubuntu
2026-04-08 01:14:47.513095 | orchestrator | 2026-04-08 01:14:42 | INFO  | Setting property architecture: x86_64
2026-04-08 01:14:47.513100 | orchestrator | 2026-04-08 01:14:42 | INFO  | Setting property hw_disk_bus: scsi
2026-04-08 01:14:47.513106 | orchestrator | 2026-04-08 01:14:43 | INFO  | Setting property hw_rng_model: virtio
2026-04-08 01:14:47.513112 | orchestrator | 2026-04-08 01:14:43 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-08 01:14:47.513119 | orchestrator | 2026-04-08 01:14:43 | INFO  | Setting property hw_watchdog_action: reset
2026-04-08 01:14:47.513126 | orchestrator | 2026-04-08 01:14:43 | INFO  | Setting property hypervisor_type: qemu
2026-04-08 01:14:47.513132 | orchestrator | 2026-04-08 01:14:44 | INFO  | Setting property os_distro: ubuntu
2026-04-08 01:14:47.513138 | orchestrator | 2026-04-08 01:14:44 | INFO  | Setting property replace_frequency: quarterly
2026-04-08 01:14:47.513144 | orchestrator | 2026-04-08 01:14:44 | INFO  | Setting property uuid_validity: last-1
2026-04-08 01:14:47.513150 | orchestrator | 2026-04-08 01:14:44 | INFO  | Setting property provided_until: none
2026-04-08 01:14:47.513156 | orchestrator | 2026-04-08 01:14:45 | INFO  | Setting property os_purpose: network
2026-04-08 01:14:47.513162 | orchestrator | 2026-04-08 01:14:45 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-08 01:14:47.513169 | orchestrator | 2026-04-08 01:14:45 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-08 01:14:47.513175 | orchestrator | 2026-04-08 01:14:45 | INFO  | Setting property internal_version: 2026-04-07
2026-04-08 01:14:47.513195 | orchestrator | 2026-04-08 01:14:45 | INFO  | Setting property image_original_user: ubuntu
2026-04-08 01:14:47.513201 | orchestrator | 2026-04-08 01:14:46 | INFO  | Setting property os_version: 2026-04-07
2026-04-08 01:14:47.513207 | orchestrator | 2026-04-08 01:14:46 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-08 01:14:47.513213 | orchestrator | 2026-04-08 01:14:46 | INFO  | Setting property image_build_date: 2026-04-07
2026-04-08 01:14:47.513218 | orchestrator | 2026-04-08 01:14:46 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-08 01:14:47.513224 | orchestrator | 2026-04-08 01:14:46 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-08 01:14:47.513231 | orchestrator | 2026-04-08 01:14:47 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-08 01:14:47.513237 | orchestrator | 2026-04-08 01:14:47 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-08 01:14:47.513260 | orchestrator | 2026-04-08 01:14:47 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-08 01:14:47.513265 | orchestrator | 2026-04-08 01:14:47 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-08 01:14:47.894338 | orchestrator | ok: Runtime: 0:03:02.715812
2026-04-08 01:14:47.908526 |
2026-04-08 01:14:47.908652 | TASK [Run checks]
2026-04-08 01:14:48.653470 | orchestrator | + set -e
2026-04-08 01:14:48.653646 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-08 01:14:48.653658 | orchestrator | ++ export INTERACTIVE=false
2026-04-08 01:14:48.653667 | orchestrator | ++ INTERACTIVE=false
2026-04-08 01:14:48.653673 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-08 01:14:48.653678 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-08 01:14:48.653684 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-08 01:14:48.654895 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-08 01:14:48.661952 | orchestrator |
2026-04-08 01:14:48.662122 | orchestrator | # CHECK
2026-04-08 01:14:48.662135 | orchestrator |
2026-04-08 01:14:48.662144 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-08 01:14:48.662155 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-08 01:14:48.662162 | orchestrator | + echo
2026-04-08 01:14:48.662169 | orchestrator | + echo '# CHECK'
2026-04-08 01:14:48.662175 | orchestrator | + echo
2026-04-08 01:14:48.662187 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-08 01:14:48.663063 | orchestrator | ++ semver latest 5.0.0
2026-04-08 01:14:48.734738 | orchestrator |
2026-04-08 01:14:48.734832 | orchestrator | ## Containers @ testbed-manager
2026-04-08 01:14:48.734842 | orchestrator |
2026-04-08 01:14:48.734851 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-08 01:14:48.734857 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-08 01:14:48.734864 | orchestrator | + echo
2026-04-08 01:14:48.734871 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-08 01:14:48.734879 | orchestrator | + echo
2026-04-08 01:14:48.734885 | orchestrator | + osism container testbed-manager ps
2026-04-08 01:14:49.806394 | orchestrator | 2026-04-08 01:14:49 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-08 01:14:50.266337 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-08 01:14:50.266461 | orchestrator | 47933b299289 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2026-04-08 01:14:50.266486 | orchestrator | 8cbfc50377f2 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2026-04-08 01:14:50.266499 | orchestrator | 85452270ce7c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-08 01:14:50.266505 | orchestrator | a97283ed97de registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-08 01:14:50.266516 | orchestrator | f5d7436217b9 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2026-04-08 01:14:50.266572 | orchestrator | 283bc915e743 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient
2026-04-08 01:14:50.266580 | orchestrator | b97a9592aef0 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-04-08 01:14:50.266587 | orchestrator | c89f3fc63225 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-08 01:14:50.266619 | orchestrator | 4b58d7973695 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-04-08 01:14:50.266626 | orchestrator | db55a061e383 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin
2026-04-08 01:14:50.266632 | orchestrator | 30212ac50172 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient
2026-04-08 01:14:50.266638 | orchestrator | efadfba3db69 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 30 minutes ago Up 30 minutes (healthy) 8080/tcp homer
2026-04-08 01:14:50.266644 | orchestrator | b209626ba01f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-08 01:14:50.266650 | orchestrator | c4bf19e1d56c registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1
2026-04-08 01:14:50.266656 | orchestrator | e45cc378dcd1 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) ceph-ansible
2026-04-08 01:14:50.266681 | orchestrator | b1d1b0d6fdc6 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) kolla-ansible
2026-04-08 01:14:50.266693 | orchestrator | 64683c87fdf1 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) osism-kubernetes
2026-04-08 01:14:50.266699 | orchestrator | 127cdd377a95 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) osism-ansible
2026-04-08 01:14:50.266705 | orchestrator | 8c02111fb64b registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 57 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-08 01:14:50.266711 | orchestrator | 509074a0ac8d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-openstack-1
2026-04-08 01:14:50.266718 | orchestrator | 93f95d602b33 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-flower-1
2026-04-08 01:14:50.266723 | orchestrator | 9b57f28d9c07 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-listener-1
2026-04-08 01:14:50.266730 | orchestrator | 0bf37bafb669 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-08 01:14:50.266743 | orchestrator | 6888d06cd8d6 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-beat-1
2026-04-08 01:14:50.266750 | orchestrator | 6f505c611961 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-08 01:14:50.266757 | orchestrator | cd57a0a70466 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1
2026-04-08 01:14:50.266763 | orchestrator | 55f7e74c939c registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 57 minutes ago Up 37 minutes (healthy) osismclient
2026-04-08 01:14:50.266770 | orchestrator | f386c4f89070 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-08 01:14:50.266776 | orchestrator | 75d9b6747b9c registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-08 01:14:50.423499 | orchestrator |
2026-04-08 01:14:50.423615 | orchestrator | ## Images @ testbed-manager
2026-04-08 01:14:50.423623 | orchestrator |
2026-04-08 01:14:50.423628 | orchestrator | + echo
2026-04-08 01:14:50.423633 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-08 01:14:50.423638 | orchestrator | + echo
2026-04-08 01:14:50.423645 | orchestrator | + osism container testbed-manager images
2026-04-08 01:14:51.964303 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-08 01:14:51.964387 | orchestrator | registry.osism.tech/osism/osism-ansible latest a2426095683b About an hour ago 638MB
2026-04-08 01:14:51.964394 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 99f899f83c44 About an hour ago 636MB
2026-04-08 01:14:51.964399 | orchestrator | registry.osism.tech/osism/ceph-ansible reef a0a40963d0fd About an hour ago 585MB
2026-04-08 01:14:51.964404 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest acfb98c800cf About an hour ago 1.24GB
2026-04-08 01:14:51.964424 | orchestrator | registry.osism.tech/osism/osism latest f023a64759ad About an hour ago 407MB
2026-04-08 01:14:51.964429 | orchestrator | registry.osism.tech/osism/osism-frontend latest 3b714008b2b7 About an hour ago 212MB
2026-04-08 01:14:51.964433 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest d1987e471ae9 About an hour ago 357MB
2026-04-08 01:14:51.964437 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e5d0d6cbf841 4 hours ago 265MB
2026-04-08 01:14:51.964441 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8ce11f7bb659 4 hours ago 579MB
2026-04-08 01:14:51.964445 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 515554dc651b 4 hours ago 668MB
2026-04-08 01:14:51.964449 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 a9406954b8bf 4 hours ago 839MB
2026-04-08 01:14:51.964453 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 64cc14a39653 4 hours ago 404MB
2026-04-08 01:14:51.964456 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8aeaa2f3cf45 4 hours ago 357MB
2026-04-08 01:14:51.964460 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 bebb1ad2af41 4 hours ago 308MB
2026-04-08 01:14:51.964480 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 efbc78a1fee5 4 hours ago 306MB
2026-04-08 01:14:51.964484 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 e2211bec1290 21 hours ago 239MB
2026-04-08 01:14:51.964488 | orchestrator | registry.osism.tech/osism/cephclient reef a997f04c3d75 21 hours ago 453MB
2026-04-08 01:14:51.964492 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-08 01:14:51.964496 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-08 01:14:51.964499 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-08 01:14:51.964503 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-08 01:14:51.964507 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-08 01:14:51.964511 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-08 01:14:51.964515 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-08 01:14:52.109753 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-08 01:14:52.110252 | orchestrator | ++ semver latest 5.0.0
2026-04-08 01:14:52.164283 | orchestrator |
2026-04-08 01:14:52.164373 | orchestrator | ## Containers @ testbed-node-0
2026-04-08 01:14:52.164381 | orchestrator |
2026-04-08 01:14:52.164386 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-08 01:14:52.164390 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-08 01:14:52.164394 | orchestrator | + echo
2026-04-08 01:14:52.164399 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-08 01:14:52.164404 | orchestrator | + echo
2026-04-08 01:14:52.164409 | orchestrator | + osism container testbed-node-0 ps
2026-04-08 01:14:53.599097 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-08 01:14:53.599188 | orchestrator | fa2f3acec606 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-08 01:14:53.599198 | orchestrator | 44da7d97c82d registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-08 01:14:53.599203 | orchestrator | 3d88543e9934 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-08 01:14:53.599208 | orchestrator | f755e987bccb registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-08 01:14:53.599212 | orchestrator | 032ba2b12b0f registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-04-08 01:14:53.599217 | orchestrator | e419b2dc36c2 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-08 01:14:53.599221 | orchestrator | a0932080ad86 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-08 01:14:53.599239 | orchestrator | 01c6f017466e registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-08 01:14:53.599244 | orchestrator | 02c13c69856a registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-08 01:14:53.599261 | orchestrator | 1929dbe8f221 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-08 01:14:53.599265 | orchestrator | e85fa8af41c3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-08 01:14:53.599269 | orchestrator | 4a02153889f8 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_worker
2026-04-08 01:14:53.599273 | orchestrator | 505b26ee528f registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-08 01:14:53.599277 | orchestrator | 0d10f348ecff registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-04-08 01:14:53.599280 | orchestrator | adf33c202acc registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-08 01:14:53.599284 | orchestrator | 8d5a173a2d85 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-08 01:14:53.599288 | orchestrator | 58373c93aee8 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-08 01:14:53.599292 | orchestrator | ff0ca980ab6b registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-08 01:14:53.599296 | orchestrator | 02dbb1b7fc87 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-08 01:14:53.599301 | orchestrator | ed0642ac7edc registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-08 01:14:53.599305 | orchestrator | 2a14109adfd2 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-08 01:14:53.599323 | orchestrator | c7389e785938 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-08 01:14:53.599327 | orchestrator | 41d4c934f7f1 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-08 01:14:53.599331 | orchestrator | d49d566ae8f2 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-04-08 01:14:53.599335 | orchestrator | 565b371911f9 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-04-08 01:14:53.599342 | orchestrator | d0afbb1b5e8f registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-08 01:14:53.599346 | orchestrator | f68ed620f6a5 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-08 01:14:53.599350 | orchestrator | ce9665493cdc registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-04-08 01:14:53.599359 | orchestrator | 3d1f8a0c1174 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-04-08 01:14:53.599367 | orchestrator | 4630e8d87962 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-08 01:14:53.599371 | orchestrator | 2a81ff8ef99d registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-04-08 01:14:53.599375 | orchestrator | 7537ae75de08 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-04-08 01:14:53.599379 | orchestrator | 7a77b96073be registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-08 01:14:53.599383 | orchestrator | 278f7c155605 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2026-04-08 01:14:53.599387 | orchestrator | 71f0208316c3 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-04-08 01:14:53.599393 | orchestrator | a1f41ef17ff1 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-04-08 01:14:53.599399 | orchestrator | 1f7ce33e9094 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-04-08 01:14:53.599406 | orchestrator | dabe5b0a73db registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-04-08 01:14:53.599415 | orchestrator | 7cae0331c861 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-04-08 01:14:53.599422 | orchestrator | 777527c4cad0 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-04-08 01:14:53.599428 | orchestrator | d2e4a8048d39 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-04-08 01:14:53.599434 | orchestrator | 1e810b32b541 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2026-04-08 01:14:53.599440 | orchestrator | bc5aa96041fb registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-04-08 01:14:53.599446 | orchestrator | 3af62a4c1fef registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-04-08 01:14:53.599458 | orchestrator | 6f121cf27410 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-04-08 01:14:53.599464 | orchestrator | 8e76b5911b13 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2026-04-08 01:14:53.599470 | orchestrator | e52852d8ded1 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2026-04-08 01:14:53.599476 | orchestrator | 5c14bc2f3d13 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-04-08 01:14:53.599487 | orchestrator | ca3b266c2e35 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0
2026-04-08 01:14:53.599493 | orchestrator | 236d3847bcac registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-04-08 01:14:53.599499 | orchestrator | 2baa173b7751 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2026-04-08 01:14:53.599506 | orchestrator | eb232b390378 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-04-08 01:14:53.599512 | orchestrator | 2ff958cd4574 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2026-04-08 01:14:53.599537 | orchestrator | 3124d8f796f6 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2026-04-08 01:14:53.599546 | orchestrator | 51fa111cf003 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2026-04-08 01:14:53.599550 | orchestrator | 8ddb3ea052a4 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2026-04-08 01:14:53.599554 | orchestrator | e6db35588081
registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-08 01:14:53.599558 | orchestrator | d9c4069d9798 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2026-04-08 01:14:53.599562 | orchestrator | 6e746a46ea4c registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-08 01:14:53.739470 | orchestrator | 2026-04-08 01:14:53.739607 | orchestrator | ## Images @ testbed-node-0 2026-04-08 01:14:53.739620 | orchestrator | 2026-04-08 01:14:53.739627 | orchestrator | + echo 2026-04-08 01:14:53.739635 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-08 01:14:53.739643 | orchestrator | + echo 2026-04-08 01:14:53.739650 | orchestrator | + osism container testbed-node-0 images 2026-04-08 01:14:55.226747 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-08 01:14:55.226827 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 e7e1000379ba 4 hours ago 1.56GB 2026-04-08 01:14:55.226834 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 e9fd03803182 4 hours ago 1.53GB 2026-04-08 01:14:55.226840 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b4f3d996542f 4 hours ago 276MB 2026-04-08 01:14:55.226844 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e5d0d6cbf841 4 hours ago 265MB 2026-04-08 01:14:55.226848 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f7da104fc809 4 hours ago 322MB 2026-04-08 01:14:55.226853 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 baaa05b258f9 4 hours ago 1.03GB 2026-04-08 01:14:55.226857 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c77181aa2654 4 hours ago 274MB 2026-04-08 01:14:55.226861 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d1393b5d2d13 4 hours ago 411MB 2026-04-08 01:14:55.226865 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8ce11f7bb659 4 hours ago 579MB 
2026-04-08 01:14:55.226869 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 515554dc651b 4 hours ago 668MB
2026-04-08 01:14:55.226890 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e99d8a4b6918 4 hours ago 266MB
2026-04-08 01:14:55.226895 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1cc68fc22173 4 hours ago 273MB
2026-04-08 01:14:55.226898 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 fb766a2342f7 4 hours ago 273MB
2026-04-08 01:14:55.226902 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 01338ec37520 4 hours ago 1.15GB
2026-04-08 01:14:55.226906 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 9074a14f92af 4 hours ago 452MB
2026-04-08 01:14:55.226910 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8aeaa2f3cf45 4 hours ago 357MB
2026-04-08 01:14:55.226926 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 cea843dcad68 4 hours ago 298MB
2026-04-08 01:14:55.226930 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 59a26bbc0b5c 4 hours ago 292MB
2026-04-08 01:14:55.226934 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 f04feaa9bd71 4 hours ago 301MB
2026-04-08 01:14:55.226938 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 efbc78a1fee5 4 hours ago 306MB
2026-04-08 01:14:55.226942 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 0ee1cb629284 4 hours ago 279MB
2026-04-08 01:14:55.226946 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 35cd0cb7d412 4 hours ago 279MB
2026-04-08 01:14:55.226950 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b424ae9dbfc8 4 hours ago 975MB
2026-04-08 01:14:55.226954 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0bd9f8fbf55b 4 hours ago 1.4GB
2026-04-08 01:14:55.226958 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 c5cf25e387a9 4 hours ago 1.41GB
2026-04-08 01:14:55.226961 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 fc4f10925ed0 4 hours ago 1.41GB
2026-04-08 01:14:55.226965 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 a6f266c11261 4 hours ago 1.72GB
2026-04-08 01:14:55.226969 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e29e4c213d69 4 hours ago 990MB
2026-04-08 01:14:55.226973 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 88b36f22b601 4 hours ago 991MB
2026-04-08 01:14:55.226977 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 13c85d4c89eb 4 hours ago 991MB
2026-04-08 01:14:55.226981 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 4e796c849388 4 hours ago 1.16GB
2026-04-08 01:14:55.226985 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 fe55703a989d 4 hours ago 1.04GB
2026-04-08 01:14:55.226989 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 82a961f19358 4 hours ago 1.04GB
2026-04-08 01:14:55.226993 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d316b7d10a35 4 hours ago 1.07GB
2026-04-08 01:14:55.226996 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 301ddcb72125 4 hours ago 1.13GB
2026-04-08 01:14:55.227000 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 d6c71219c8af 4 hours ago 1.24GB
2026-04-08 01:14:55.227019 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 cd138a3ceef5 4 hours ago 976MB
2026-04-08 01:14:55.227024 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 703e19f20451 4 hours ago 975MB
2026-04-08 01:14:55.227028 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 926bf7bd7183 4 hours ago 1.03GB
2026-04-08 01:14:55.227032 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 0a84f1ed4d84 4 hours ago 1.05GB
2026-04-08 01:14:55.227040 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 809c42c79e68 4 hours ago 1.03GB
2026-04-08 01:14:55.227044 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 92ed9b8c34b0 4 hours ago 1.05GB
2026-04-08 01:14:55.227048 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 f365649c6be2 4 hours ago 1.03GB
2026-04-08 01:14:55.227052 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 463d0a2329dc 4 hours ago 1.1GB
2026-04-08 01:14:55.227055 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e82b8dba8efd 4 hours ago 989MB
2026-04-08 01:14:55.227062 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8bbacb4b241d 4 hours ago 983MB
2026-04-08 01:14:55.227067 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3e388f7070d2 4 hours ago 984MB
2026-04-08 01:14:55.227070 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 840f84772dfa 4 hours ago 984MB
2026-04-08 01:14:55.227074 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 23ad79161a59 4 hours ago 989MB
2026-04-08 01:14:55.227078 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2b840465e874 4 hours ago 984MB
2026-04-08 01:14:55.227082 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 6159f00e0617 4 hours ago 990MB
2026-04-08 01:14:55.227086 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 fc4047af8774 4 hours ago 1.05GB
2026-04-08 01:14:55.227090 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 3839d6eb0a5f 4 hours ago 974MB
2026-04-08 01:14:55.227094 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 89c47101f144 4 hours ago 974MB
2026-04-08 01:14:55.227098 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 1bdbebc695d9 4 hours ago 974MB
2026-04-08 01:14:55.227102 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 261b598fd6f3 4 hours ago 973MB
2026-04-08 01:14:55.227106 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 2c07c0b95371 4 hours ago 1.21GB
2026-04-08 01:14:55.227110 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 fe29168ba70c 4 hours ago 1.37GB
2026-04-08 01:14:55.227114 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7310f3d05dba 4 hours ago 1.21GB
2026-04-08 01:14:55.227118 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 a947ca672a9a 4 hours ago 1.21GB
2026-04-08 01:14:55.227122 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 107ea1b8d299 4 hours ago 840MB
2026-04-08 01:14:55.227126 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 f85c55370419 4 hours ago 840MB
2026-04-08 01:14:55.227130 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 d7cfc4f8b643 4 hours ago 840MB
2026-04-08 01:14:55.227134 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 5337b2234a85 4 hours ago 840MB
2026-04-08 01:14:55.227138 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 21 hours ago 1.35GB
2026-04-08 01:14:55.367416 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-08 01:14:55.367709 | orchestrator | ++ semver latest 5.0.0
2026-04-08 01:14:55.436723 | orchestrator |
2026-04-08 01:14:55.436815 | orchestrator | ## Containers @ testbed-node-1
2026-04-08 01:14:55.436829 | orchestrator |
2026-04-08 01:14:55.436841 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-08 01:14:55.436849 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-08 01:14:55.436856 | orchestrator | + echo
2026-04-08 01:14:55.436864 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-08 01:14:55.436872 | orchestrator | + echo
2026-04-08 01:14:55.436880 | orchestrator | + osism container testbed-node-1 ps
2026-04-08 01:14:56.934091 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-08 01:14:56.934179 | orchestrator | d0cf9799bba2 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-08 01:14:56.934948 | orchestrator | ce8469ea595d registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-08 01:14:56.934986 | orchestrator | 4e6c0a207e84 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-08 01:14:56.934993 | orchestrator | 2b10ebd76191 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-08 01:14:56.935000 | orchestrator | 2f6d29636ae7 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-08 01:14:56.935025 | orchestrator | d2d81d68193b registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-08 01:14:56.935033 | orchestrator | 931eca4ed1a3 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-08 01:14:56.935039 | orchestrator | ace029db14f8 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-08 01:14:56.935049 | orchestrator | 9e7e108a2872 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-08 01:14:56.935055 | orchestrator | ed491566551b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-08 01:14:56.935062 | orchestrator | 4b8a0ef7e497 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-08 01:14:56.935069 | orchestrator | 280cd01763a5 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_worker
2026-04-08 01:14:56.935075 | orchestrator | 7c77f26554aa registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-04-08 01:14:56.935082 | orchestrator | aee35aa6c0f1 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-08 01:14:56.935089 | orchestrator | 24ad3dbbf112 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-08 01:14:56.935096 | orchestrator | a8b08a003bb5 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-08 01:14:56.935102 | orchestrator | 6c59c3957918 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-08 01:14:56.935109 | orchestrator | 551bbf07e068 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-08 01:14:56.935115 | orchestrator | dd5b06401370 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-08 01:14:56.935146 | orchestrator | bd70de773b55 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-08 01:14:56.935153 | orchestrator | 2279c13c09b4 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-08 01:14:56.935176 | orchestrator | ad96d8ddc861 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-08 01:14:56.935184 | orchestrator | fbe41a83e1f6 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-08 01:14:56.935190 | orchestrator | d85bea682e7b registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-04-08 01:14:56.935196 | orchestrator | 361a46f4d602 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-04-08 01:14:56.935202 | orchestrator | c8d4d076f21f registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-08 01:14:56.935208 | orchestrator | 436be304dca6 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-08 01:14:56.935220 | orchestrator | 1b18baa77dc9 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-04-08 01:14:56.935228 | orchestrator | 3db3c4425cf8 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-04-08 01:14:56.935234 | orchestrator | 48fb0315130f registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-08 01:14:56.935241 | orchestrator | a3e46bcb6d95 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-04-08 01:14:56.935247 | orchestrator | d9b238f56424 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-04-08 01:14:56.935254 | orchestrator | 9fb20598e6e0 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-08 01:14:56.935260 | orchestrator | 9883a355ce17 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2026-04-08 01:14:56.935266 | orchestrator | a45c1b3367c2 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-04-08 01:14:56.935273 | orchestrator | 149821f17228 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-04-08 01:14:56.935279 | orchestrator | 2ff88491442a registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-04-08 01:14:56.935285 | orchestrator | a48d8232cfad registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-04-08 01:14:56.935291 | orchestrator | 31ecea789199 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2026-04-08 01:14:56.935313 | orchestrator | 17310e38f411 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2026-04-08 01:14:56.935319 | orchestrator | 39c7029e93b8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2026-04-08 01:14:56.935324 | orchestrator | f77f42b4292f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2026-04-08 01:14:56.935331 | orchestrator | 161c1daae3d8 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-04-08 01:14:56.935337 | orchestrator | 388b8dce0624 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-04-08 01:14:56.935354 | orchestrator | ecbae0db67a7 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-04-08 01:14:56.935361 | orchestrator | 783b1b4c8542 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2026-04-08 01:14:56.935368 | orchestrator | 94ecda30aa75 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2026-04-08 01:14:56.935374 | orchestrator | bfb9719f63f5 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-04-08 01:14:56.935380 | orchestrator | a630e307f239 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2026-04-08 01:14:56.935386 | orchestrator | 1cee6d22dde9 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-04-08 01:14:56.935393 | orchestrator | c9b899dd884e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-04-08 01:14:56.935399 | orchestrator | 55ccfc81d02e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-04-08 01:14:56.935410 | orchestrator | 351c82b1bb46 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2026-04-08 01:14:56.935417 | orchestrator | b15616bef8ac registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2026-04-08 01:14:56.935423 | orchestrator | 19c5b9a4b2d1 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2026-04-08 01:14:56.935429 | orchestrator | 123f8c03e6a3 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2026-04-08 01:14:56.935436 | orchestrator | 7ddb2008d8af registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-04-08 01:14:56.935443 | orchestrator | b06ae3447449 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-08 01:14:56.935454 | orchestrator | c023abe7ce4f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-04-08 01:14:57.080646 | orchestrator |
2026-04-08 01:14:57.080723 | orchestrator | ## Images @ testbed-node-1
2026-04-08 01:14:57.080735 | orchestrator |
2026-04-08 01:14:57.080745 | orchestrator | + echo
2026-04-08 01:14:57.080752 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-08 01:14:57.080759 | orchestrator | + echo
2026-04-08 01:14:57.080765 | orchestrator | + osism container testbed-node-1 images
2026-04-08 01:14:58.588981 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-08 01:14:58.589053 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 e7e1000379ba 4 hours ago 1.56GB
2026-04-08 01:14:58.589059 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 e9fd03803182 4 hours ago 1.53GB
2026-04-08 01:14:58.589064 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b4f3d996542f 4 hours ago 276MB
2026-04-08 01:14:58.589068 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e5d0d6cbf841 4 hours ago 265MB
2026-04-08 01:14:58.589073 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f7da104fc809 4 hours ago 322MB
2026-04-08 01:14:58.589077 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 baaa05b258f9 4 hours ago 1.03GB
2026-04-08 01:14:58.589081 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c77181aa2654 4 hours ago 274MB
2026-04-08 01:14:58.589084 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d1393b5d2d13 4 hours ago 411MB
2026-04-08 01:14:58.589088 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8ce11f7bb659 4 hours ago 579MB
2026-04-08 01:14:58.589092 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 515554dc651b 4 hours ago 668MB
2026-04-08 01:14:58.589096 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e99d8a4b6918 4 hours ago 266MB
2026-04-08 01:14:58.589100 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1cc68fc22173 4 hours ago 273MB
2026-04-08 01:14:58.589104 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 fb766a2342f7 4 hours ago 273MB
2026-04-08 01:14:58.589108 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 01338ec37520 4 hours ago 1.15GB
2026-04-08 01:14:58.589112 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 9074a14f92af 4 hours ago 452MB
2026-04-08 01:14:58.589117 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 cea843dcad68 4 hours ago 298MB
2026-04-08 01:14:58.589121 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8aeaa2f3cf45 4 hours ago 357MB
2026-04-08 01:14:58.589125 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 59a26bbc0b5c 4 hours ago 292MB
2026-04-08 01:14:58.589129 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 f04feaa9bd71 4 hours ago 301MB
2026-04-08 01:14:58.589133 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 efbc78a1fee5 4 hours ago 306MB
2026-04-08 01:14:58.589136 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 0ee1cb629284 4 hours ago 279MB
2026-04-08 01:14:58.589140 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 35cd0cb7d412 4 hours ago 279MB
2026-04-08 01:14:58.589144 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b424ae9dbfc8 4 hours ago 975MB
2026-04-08 01:14:58.589150 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0bd9f8fbf55b 4 hours ago 1.4GB
2026-04-08 01:14:58.589156 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 c5cf25e387a9 4 hours ago 1.41GB
2026-04-08 01:14:58.589184 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 fc4f10925ed0 4 hours ago 1.41GB
2026-04-08 01:14:58.589190 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 a6f266c11261 4 hours ago 1.72GB
2026-04-08 01:14:58.589196 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e29e4c213d69 4 hours ago 990MB
2026-04-08 01:14:58.589202 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 88b36f22b601 4 hours ago 991MB
2026-04-08 01:14:58.589208 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 13c85d4c89eb 4 hours ago 991MB
2026-04-08 01:14:58.589215 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 4e796c849388 4 hours ago 1.16GB
2026-04-08 01:14:58.589221 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 fe55703a989d 4 hours ago 1.04GB
2026-04-08 01:14:58.589227 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 82a961f19358 4 hours ago 1.04GB
2026-04-08 01:14:58.589263 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d316b7d10a35 4 hours ago 1.07GB
2026-04-08 01:14:58.589269 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 301ddcb72125 4 hours ago 1.13GB
2026-04-08 01:14:58.589273 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 d6c71219c8af 4 hours ago 1.24GB
2026-04-08 01:14:58.589289 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 926bf7bd7183 4 hours ago 1.03GB
2026-04-08 01:14:58.589293 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 0a84f1ed4d84 4 hours ago 1.05GB
2026-04-08 01:14:58.589297 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 809c42c79e68 4 hours ago 1.03GB
2026-04-08 01:14:58.589301 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 92ed9b8c34b0 4 hours ago 1.05GB
2026-04-08 01:14:58.589305 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 f365649c6be2 4 hours ago 1.03GB
2026-04-08 01:14:58.589309 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 463d0a2329dc 4 hours ago 1.1GB
2026-04-08 01:14:58.589313 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e82b8dba8efd 4 hours ago 989MB
2026-04-08 01:14:58.589317 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8bbacb4b241d 4 hours ago 983MB
2026-04-08 01:14:58.589320 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3e388f7070d2 4 hours ago 984MB
2026-04-08 01:14:58.589324 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 840f84772dfa 4 hours ago 984MB
2026-04-08 01:14:58.589328 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 23ad79161a59 4 hours ago 989MB
2026-04-08 01:14:58.589332 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2b840465e874 4 hours ago 984MB
2026-04-08 01:14:58.589336 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 2c07c0b95371 4 hours ago 1.21GB
2026-04-08 01:14:58.589340 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 fe29168ba70c 4 hours ago 1.37GB
2026-04-08 01:14:58.589344 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7310f3d05dba 4 hours ago 1.21GB
2026-04-08 01:14:58.589348 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 a947ca672a9a 4 hours ago 1.21GB
2026-04-08 01:14:58.589352 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 107ea1b8d299 4 hours ago 840MB
2026-04-08 01:14:58.589355 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 f85c55370419 4 hours ago 840MB
2026-04-08 01:14:58.589359 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 5337b2234a85 4 hours ago 840MB
2026-04-08 01:14:58.589370 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 d7cfc4f8b643 4 hours ago 840MB
2026-04-08 01:14:58.589377 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 21 hours ago 1.35GB
2026-04-08 01:14:58.732759 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-08 01:14:58.733194 | orchestrator | ++ semver latest 5.0.0
2026-04-08 01:14:58.784993 | orchestrator |
2026-04-08 01:14:58.785059 | orchestrator | ## Containers @ testbed-node-2
2026-04-08 01:14:58.785066 | orchestrator |
2026-04-08 01:14:58.785070 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-08 01:14:58.785075 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-08 01:14:58.785079 | orchestrator | + echo
2026-04-08 01:14:58.785084 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-08 01:14:58.785089 | orchestrator | + echo
2026-04-08 01:14:58.785093 | orchestrator | + osism container testbed-node-2 ps
2026-04-08 01:15:00.296983 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-08 01:15:00.297121 | orchestrator | 361900e7bda9 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-08 01:15:00.297136 | orchestrator | 344ad178d8c5 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-08 01:15:00.297144 | orchestrator | 72b9028dd050 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-08 01:15:00.297151 | orchestrator | c4a5c30e7d84 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-08 01:15:00.297158 | orchestrator | a35c5c3c3f7a registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-04-08 01:15:00.297166 | orchestrator | 5ccc82fd4d26 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-08 01:15:00.297173 | orchestrator | d2cb9c3b3266 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-08 01:15:00.297180 | orchestrator | fdad09663ad7 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-08 01:15:00.297187 | orchestrator | ac8033d1bede registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-08 01:15:00.297193 | orchestrator | 6d9a00d819a1 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-08 01:15:00.297200 | orchestrator | a4f6ca7805f3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-08 01:15:00.297206 | orchestrator | cf258cd35fb3 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-08 01:15:00.297213 | orchestrator | 804ccdfd63c5 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-08 01:15:00.297220 | orchestrator | 27469114e13b registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-04-08 01:15:00.297244 | orchestrator | 975642f4dae5 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-08 01:15:00.297273 | orchestrator | 933e1cc22e4b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-08 01:15:00.297280 | orchestrator | c127c00c5342 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-08 01:15:00.297286 | orchestrator | b4a502b921f3 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-08 01:15:00.297292 | orchestrator | 8bdb75e52b76 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-08 01:15:00.297299 | orchestrator | c64348397075 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-08 01:15:00.297305 | orchestrator | 4bafdc6778b2 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-08 01:15:00.297328 | orchestrator | 56f5fa987e22 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-08 01:15:00.297335 | orchestrator | e3f524eeccfb registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-08 01:15:00.297341 | orchestrator | f45d3642abd9 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-04-08 01:15:00.297347 | orchestrator | f30d318a8ae4 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-04-08 01:15:00.297355 | orchestrator | 8554aafdaa12 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-08 01:15:00.297362 | orchestrator | df887fb6b65c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-08 01:15:00.297368 | orchestrator | d27e520f55f0 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…"
14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-04-08 01:15:00.297375 | orchestrator | c5eeea016c37 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2026-04-08 01:15:00.297381 | orchestrator | 6a1061742e4f registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-08 01:15:00.297388 | orchestrator | 3d70eb408422 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-08 01:15:00.297394 | orchestrator | ca0c00d61c5c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-08 01:15:00.297401 | orchestrator | 221011fd4c7f registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-08 01:15:00.297407 | orchestrator | 413b8f9980fa registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-04-08 01:15:00.297420 | orchestrator | 17680a5527a5 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-04-08 01:15:00.297427 | orchestrator | e8a924528355 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-08 01:15:00.297434 | orchestrator | 1be6f71d5100 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-04-08 01:15:00.297440 | orchestrator | 291d7418112b registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-08 01:15:00.297548 | orchestrator | e41c233016a4 registry.osism.tech/kolla/opensearch-dashboards:2024.2 
"dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-08 01:15:00.297559 | orchestrator | 6063d286e963 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-08 01:15:00.297566 | orchestrator | 722f3d1f9d65 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-08 01:15:00.297573 | orchestrator | e70373d537e8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2026-04-08 01:15:00.297580 | orchestrator | de5a967fd94b registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-04-08 01:15:00.297586 | orchestrator | d75052d3cb1e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-08 01:15:00.297592 | orchestrator | 3c9acd960e39 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-08 01:15:00.297599 | orchestrator | afeba1c36c76 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2026-04-08 01:15:00.297605 | orchestrator | 768750d97603 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2026-04-08 01:15:00.297612 | orchestrator | a51a2f4b2ee3 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2026-04-08 01:15:00.297618 | orchestrator | 4502127cee14 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-04-08 01:15:00.297631 | orchestrator | d23490622b0e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2026-04-08 01:15:00.297638 | 
orchestrator | d1068836e105 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-04-08 01:15:00.297643 | orchestrator | 1d46f5716695 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-04-08 01:15:00.297650 | orchestrator | e14d84da9eac registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2026-04-08 01:15:00.297656 | orchestrator | d0757399dbe6 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-04-08 01:15:00.297669 | orchestrator | 5cedb671c3b7 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2026-04-08 01:15:00.297676 | orchestrator | 6b7ca24eb0bd registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-04-08 01:15:00.297682 | orchestrator | 3bc57d313aef registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-08 01:15:00.297689 | orchestrator | 9c25f75164a2 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-08 01:15:00.297694 | orchestrator | 6d80fab38d41 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-08 01:15:00.443154 | orchestrator | 2026-04-08 01:15:00.443230 | orchestrator | ## Images @ testbed-node-2 2026-04-08 01:15:00.443238 | orchestrator | 2026-04-08 01:15:00.443243 | orchestrator | + echo 2026-04-08 01:15:00.443248 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-08 01:15:00.443256 | orchestrator | + echo 2026-04-08 01:15:00.443263 | orchestrator | + osism container testbed-node-2 images 2026-04-08 01:15:01.917036 | orchestrator | 
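The `+ for node in testbed-manager testbed-node-0 ...` lines show how the check script emits one `## Containers @ <node>` and `## Images @ <node>` section per host. A stand-alone sketch of that reporting loop, with the real `osism container "$node" ps` call left as a comment since the `osism` CLI only exists on the manager node:

```shell
# Sketch of the per-node reporting loop visible in the trace; the osism
# call is commented out so the snippet runs anywhere.
for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
  echo
  echo "## Containers @ ${node}"
  echo
  # osism container "$node" ps    # real command executed in the job
done
```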
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-08 01:15:01.917110 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 e7e1000379ba 4 hours ago 1.56GB 2026-04-08 01:15:01.917116 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 e9fd03803182 4 hours ago 1.53GB 2026-04-08 01:15:01.917133 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b4f3d996542f 4 hours ago 276MB 2026-04-08 01:15:01.917137 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e5d0d6cbf841 4 hours ago 265MB 2026-04-08 01:15:01.917141 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f7da104fc809 4 hours ago 322MB 2026-04-08 01:15:01.917145 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 baaa05b258f9 4 hours ago 1.03GB 2026-04-08 01:15:01.917149 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c77181aa2654 4 hours ago 274MB 2026-04-08 01:15:01.917153 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d1393b5d2d13 4 hours ago 411MB 2026-04-08 01:15:01.917157 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8ce11f7bb659 4 hours ago 579MB 2026-04-08 01:15:01.917161 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 515554dc651b 4 hours ago 668MB 2026-04-08 01:15:01.917165 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e99d8a4b6918 4 hours ago 266MB 2026-04-08 01:15:01.917169 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1cc68fc22173 4 hours ago 273MB 2026-04-08 01:15:01.917173 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 fb766a2342f7 4 hours ago 273MB 2026-04-08 01:15:01.917177 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 01338ec37520 4 hours ago 1.15GB 2026-04-08 01:15:01.917181 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 9074a14f92af 4 hours ago 452MB 2026-04-08 01:15:01.917185 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 cea843dcad68 4 hours ago 298MB 2026-04-08 01:15:01.917189 | orchestrator 
| registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8aeaa2f3cf45 4 hours ago 357MB 2026-04-08 01:15:01.917193 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 59a26bbc0b5c 4 hours ago 292MB 2026-04-08 01:15:01.917197 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 f04feaa9bd71 4 hours ago 301MB 2026-04-08 01:15:01.917214 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 efbc78a1fee5 4 hours ago 306MB 2026-04-08 01:15:01.917218 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 0ee1cb629284 4 hours ago 279MB 2026-04-08 01:15:01.917222 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 35cd0cb7d412 4 hours ago 279MB 2026-04-08 01:15:01.917226 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b424ae9dbfc8 4 hours ago 975MB 2026-04-08 01:15:01.917230 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0bd9f8fbf55b 4 hours ago 1.4GB 2026-04-08 01:15:01.917234 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 c5cf25e387a9 4 hours ago 1.41GB 2026-04-08 01:15:01.917240 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 fc4f10925ed0 4 hours ago 1.41GB 2026-04-08 01:15:01.917246 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 a6f266c11261 4 hours ago 1.72GB 2026-04-08 01:15:01.917252 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e29e4c213d69 4 hours ago 990MB 2026-04-08 01:15:01.917261 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 88b36f22b601 4 hours ago 991MB 2026-04-08 01:15:01.917269 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 13c85d4c89eb 4 hours ago 991MB 2026-04-08 01:15:01.917275 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 4e796c849388 4 hours ago 1.16GB 2026-04-08 01:15:01.917281 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 fe55703a989d 4 hours ago 1.04GB 
2026-04-08 01:15:01.917287 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 82a961f19358 4 hours ago 1.04GB 2026-04-08 01:15:01.917293 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d316b7d10a35 4 hours ago 1.07GB 2026-04-08 01:15:01.917299 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 301ddcb72125 4 hours ago 1.13GB 2026-04-08 01:15:01.917306 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 d6c71219c8af 4 hours ago 1.24GB 2026-04-08 01:15:01.917324 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 926bf7bd7183 4 hours ago 1.03GB 2026-04-08 01:15:01.917331 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 0a84f1ed4d84 4 hours ago 1.05GB 2026-04-08 01:15:01.917337 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 809c42c79e68 4 hours ago 1.03GB 2026-04-08 01:15:01.917345 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 92ed9b8c34b0 4 hours ago 1.05GB 2026-04-08 01:15:01.917354 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 f365649c6be2 4 hours ago 1.03GB 2026-04-08 01:15:01.917360 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 463d0a2329dc 4 hours ago 1.1GB 2026-04-08 01:15:01.917367 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e82b8dba8efd 4 hours ago 989MB 2026-04-08 01:15:01.917372 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8bbacb4b241d 4 hours ago 983MB 2026-04-08 01:15:01.917379 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3e388f7070d2 4 hours ago 984MB 2026-04-08 01:15:01.917395 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 840f84772dfa 4 hours ago 984MB 2026-04-08 01:15:01.917408 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 23ad79161a59 4 hours ago 989MB 2026-04-08 01:15:01.917414 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2b840465e874 4 hours ago 984MB 
2026-04-08 01:15:01.917428 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 2c07c0b95371 4 hours ago 1.21GB 2026-04-08 01:15:01.917441 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 fe29168ba70c 4 hours ago 1.37GB 2026-04-08 01:15:01.917446 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7310f3d05dba 4 hours ago 1.21GB 2026-04-08 01:15:01.917450 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 a947ca672a9a 4 hours ago 1.21GB 2026-04-08 01:15:01.917454 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 107ea1b8d299 4 hours ago 840MB 2026-04-08 01:15:01.917459 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 f85c55370419 4 hours ago 840MB 2026-04-08 01:15:01.917465 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 d7cfc4f8b643 4 hours ago 840MB 2026-04-08 01:15:01.917472 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 5337b2234a85 4 hours ago 840MB 2026-04-08 01:15:01.917478 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 21 hours ago 1.35GB 2026-04-08 01:15:02.062334 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-08 01:15:02.071734 | orchestrator | + set -e 2026-04-08 01:15:02.071823 | orchestrator | + source /opt/manager-vars.sh 2026-04-08 01:15:02.073479 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-08 01:15:02.073642 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-08 01:15:02.073659 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-08 01:15:02.073666 | orchestrator | ++ CEPH_VERSION=reef 2026-04-08 01:15:02.073673 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-08 01:15:02.073681 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-08 01:15:02.073687 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-08 01:15:02.073693 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-08 01:15:02.073700 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-08 
01:15:02.073706 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-08 01:15:02.073712 | orchestrator | ++ export ARA=false 2026-04-08 01:15:02.073718 | orchestrator | ++ ARA=false 2026-04-08 01:15:02.073724 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-08 01:15:02.073730 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-08 01:15:02.073736 | orchestrator | ++ export TEMPEST=true 2026-04-08 01:15:02.073742 | orchestrator | ++ TEMPEST=true 2026-04-08 01:15:02.073749 | orchestrator | ++ export IS_ZUUL=true 2026-04-08 01:15:02.073755 | orchestrator | ++ IS_ZUUL=true 2026-04-08 01:15:02.073761 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 01:15:02.073768 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 01:15:02.073774 | orchestrator | ++ export EXTERNAL_API=false 2026-04-08 01:15:02.073781 | orchestrator | ++ EXTERNAL_API=false 2026-04-08 01:15:02.073787 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-08 01:15:02.073793 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-08 01:15:02.073799 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-08 01:15:02.073807 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-08 01:15:02.073813 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-08 01:15:02.073820 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-08 01:15:02.073825 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-08 01:15:02.073832 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-08 01:15:02.080062 | orchestrator | + set -e 2026-04-08 01:15:02.080132 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-08 01:15:02.080140 | orchestrator | ++ export INTERACTIVE=false 2026-04-08 01:15:02.080147 | orchestrator | ++ INTERACTIVE=false 2026-04-08 01:15:02.080151 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-08 01:15:02.080156 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-08 01:15:02.080160 | orchestrator 
| + source /opt/configuration/scripts/manager-version.sh 2026-04-08 01:15:02.080352 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-08 01:15:02.084485 | orchestrator | 2026-04-08 01:15:02.084626 | orchestrator | # Ceph status 2026-04-08 01:15:02.084634 | orchestrator | 2026-04-08 01:15:02.084641 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-08 01:15:02.084649 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-08 01:15:02.084657 | orchestrator | + echo 2026-04-08 01:15:02.084663 | orchestrator | + echo '# Ceph status' 2026-04-08 01:15:02.084670 | orchestrator | + echo 2026-04-08 01:15:02.084676 | orchestrator | + ceph -s 2026-04-08 01:15:02.641002 | orchestrator | cluster: 2026-04-08 01:15:02.641115 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-08 01:15:02.641127 | orchestrator | health: HEALTH_OK 2026-04-08 01:15:02.641135 | orchestrator | 2026-04-08 01:15:02.641142 | orchestrator | services: 2026-04-08 01:15:02.641148 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2026-04-08 01:15:02.641157 | orchestrator | mgr: testbed-node-0(active, since 16m), standbys: testbed-node-1, testbed-node-2 2026-04-08 01:15:02.641165 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-08 01:15:02.641172 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 23m) 2026-04-08 01:15:02.641179 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-08 01:15:02.641186 | orchestrator | 2026-04-08 01:15:02.641190 | orchestrator | data: 2026-04-08 01:15:02.641195 | orchestrator | volumes: 1/1 healthy 2026-04-08 01:15:02.641199 | orchestrator | pools: 14 pools, 401 pgs 2026-04-08 01:15:02.641203 | orchestrator | objects: 556 objects, 2.2 GiB 2026-04-08 01:15:02.641207 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-08 01:15:02.641211 | orchestrator | pgs: 401 active+clean 2026-04-08 
01:15:02.641215 | orchestrator | 2026-04-08 01:15:02.691135 | orchestrator | 2026-04-08 01:15:02.691207 | orchestrator | # Ceph versions 2026-04-08 01:15:02.691213 | orchestrator | 2026-04-08 01:15:02.691218 | orchestrator | + echo 2026-04-08 01:15:02.691223 | orchestrator | + echo '# Ceph versions' 2026-04-08 01:15:02.691228 | orchestrator | + echo 2026-04-08 01:15:02.691233 | orchestrator | + ceph versions 2026-04-08 01:15:03.309380 | orchestrator | { 2026-04-08 01:15:03.309464 | orchestrator | "mon": { 2026-04-08 01:15:03.309475 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-08 01:15:03.309482 | orchestrator | }, 2026-04-08 01:15:03.309488 | orchestrator | "mgr": { 2026-04-08 01:15:03.309585 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-08 01:15:03.309594 | orchestrator | }, 2026-04-08 01:15:03.309600 | orchestrator | "osd": { 2026-04-08 01:15:03.309605 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-08 01:15:03.309611 | orchestrator | }, 2026-04-08 01:15:03.309616 | orchestrator | "mds": { 2026-04-08 01:15:03.309622 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-08 01:15:03.309628 | orchestrator | }, 2026-04-08 01:15:03.309634 | orchestrator | "rgw": { 2026-04-08 01:15:03.309640 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-08 01:15:03.309646 | orchestrator | }, 2026-04-08 01:15:03.309652 | orchestrator | "overall": { 2026-04-08 01:15:03.309658 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-08 01:15:03.309665 | orchestrator | } 2026-04-08 01:15:03.309671 | orchestrator | } 2026-04-08 01:15:03.355878 | orchestrator | 2026-04-08 01:15:03.355951 | orchestrator | # Ceph OSD tree 2026-04-08 01:15:03.355957 | 
orchestrator | 2026-04-08 01:15:03.355962 | orchestrator | + echo 2026-04-08 01:15:03.355967 | orchestrator | + echo '# Ceph OSD tree' 2026-04-08 01:15:03.355972 | orchestrator | + echo 2026-04-08 01:15:03.355976 | orchestrator | + ceph osd df tree 2026-04-08 01:15:03.857045 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-08 01:15:03.857147 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-04-08 01:15:03.857155 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-04-08 01:15:03.857159 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.35 1.24 201 up osd.0 2026-04-08 01:15:03.857164 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 916 MiB 843 MiB 1 KiB 74 MiB 19 GiB 4.48 0.76 189 up osd.5 2026-04-08 01:15:03.857168 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-04-08 01:15:03.857172 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.44 1.09 190 up osd.1 2026-04-08 01:15:03.857176 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.40 0.91 202 up osd.4 2026-04-08 01:15:03.857199 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-04-08 01:15:03.857203 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.73 1.14 188 up osd.2 2026-04-08 01:15:03.857207 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 74 MiB 19 GiB 5.10 0.86 200 up osd.3 2026-04-08 01:15:03.857211 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-04-08 01:15:03.857215 | orchestrator | MIN/MAX VAR: 0.76/1.24 STDDEV: 1.00 2026-04-08 01:15:03.920591 | orchestrator | 2026-04-08 01:15:03.920681 | orchestrator | # Ceph monitor status 
2026-04-08 01:15:03.920690 | orchestrator | 2026-04-08 01:15:03.920696 | orchestrator | + echo 2026-04-08 01:15:03.920700 | orchestrator | + echo '# Ceph monitor status' 2026-04-08 01:15:03.920705 | orchestrator | + echo 2026-04-08 01:15:03.920709 | orchestrator | + ceph mon stat 2026-04-08 01:15:04.537831 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-08 01:15:04.581846 | orchestrator | 2026-04-08 01:15:04.581917 | orchestrator | # Ceph quorum status 2026-04-08 01:15:04.581924 | orchestrator | 2026-04-08 01:15:04.581930 | orchestrator | + echo 2026-04-08 01:15:04.581934 | orchestrator | + echo '# Ceph quorum status' 2026-04-08 01:15:04.581939 | orchestrator | + echo 2026-04-08 01:15:04.582153 | orchestrator | + ceph quorum_status 2026-04-08 01:15:04.582310 | orchestrator | + jq 2026-04-08 01:15:05.240770 | orchestrator | { 2026-04-08 01:15:05.240859 | orchestrator | "election_epoch": 6, 2026-04-08 01:15:05.240866 | orchestrator | "quorum": [ 2026-04-08 01:15:05.240871 | orchestrator | 0, 2026-04-08 01:15:05.240875 | orchestrator | 1, 2026-04-08 01:15:05.240879 | orchestrator | 2 2026-04-08 01:15:05.240883 | orchestrator | ], 2026-04-08 01:15:05.240887 | orchestrator | "quorum_names": [ 2026-04-08 01:15:05.240891 | orchestrator | "testbed-node-0", 2026-04-08 01:15:05.240896 | orchestrator | "testbed-node-1", 2026-04-08 01:15:05.240901 | orchestrator | "testbed-node-2" 2026-04-08 01:15:05.240905 | orchestrator | ], 2026-04-08 01:15:05.240909 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-08 01:15:05.240914 | orchestrator | "quorum_age": 1575, 2026-04-08 01:15:05.240917 | orchestrator | "features": { 2026-04-08 
01:15:05.240922 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-08 01:15:05.240925 | orchestrator | "quorum_mon": [ 2026-04-08 01:15:05.240930 | orchestrator | "kraken", 2026-04-08 01:15:05.240933 | orchestrator | "luminous", 2026-04-08 01:15:05.240937 | orchestrator | "mimic", 2026-04-08 01:15:05.240942 | orchestrator | "osdmap-prune", 2026-04-08 01:15:05.240945 | orchestrator | "nautilus", 2026-04-08 01:15:05.240949 | orchestrator | "octopus", 2026-04-08 01:15:05.240953 | orchestrator | "pacific", 2026-04-08 01:15:05.240957 | orchestrator | "elector-pinging", 2026-04-08 01:15:05.240961 | orchestrator | "quincy", 2026-04-08 01:15:05.240965 | orchestrator | "reef" 2026-04-08 01:15:05.240969 | orchestrator | ] 2026-04-08 01:15:05.240973 | orchestrator | }, 2026-04-08 01:15:05.240977 | orchestrator | "monmap": { 2026-04-08 01:15:05.240981 | orchestrator | "epoch": 1, 2026-04-08 01:15:05.240985 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-08 01:15:05.240989 | orchestrator | "modified": "2026-04-08T00:48:26.680495Z", 2026-04-08 01:15:05.240993 | orchestrator | "created": "2026-04-08T00:48:26.680495Z", 2026-04-08 01:15:05.240997 | orchestrator | "min_mon_release": 18, 2026-04-08 01:15:05.241001 | orchestrator | "min_mon_release_name": "reef", 2026-04-08 01:15:05.241005 | orchestrator | "election_strategy": 1, 2026-04-08 01:15:05.241009 | orchestrator | "disallowed_leaders": "", 2026-04-08 01:15:05.241013 | orchestrator | "stretch_mode": false, 2026-04-08 01:15:05.241017 | orchestrator | "tiebreaker_mon": "", 2026-04-08 01:15:05.241021 | orchestrator | "removed_ranks": "", 2026-04-08 01:15:05.241024 | orchestrator | "features": { 2026-04-08 01:15:05.241029 | orchestrator | "persistent": [ 2026-04-08 01:15:05.241032 | orchestrator | "kraken", 2026-04-08 01:15:05.241036 | orchestrator | "luminous", 2026-04-08 01:15:05.241040 | orchestrator | "mimic", 2026-04-08 01:15:05.241044 | orchestrator | "osdmap-prune", 2026-04-08 
2026-04-08 01:15:05.241061 | orchestrator |                 "nautilus",
2026-04-08 01:15:05.241065 | orchestrator |                 "octopus",
2026-04-08 01:15:05.241069 | orchestrator |                 "pacific",
2026-04-08 01:15:05.241073 | orchestrator |                 "elector-pinging",
2026-04-08 01:15:05.241077 | orchestrator |                 "quincy",
2026-04-08 01:15:05.241081 | orchestrator |                 "reef"
2026-04-08 01:15:05.241085 | orchestrator |             ],
2026-04-08 01:15:05.241089 | orchestrator |             "optional": []
2026-04-08 01:15:05.241092 | orchestrator |         },
2026-04-08 01:15:05.241096 | orchestrator |         "mons": [
2026-04-08 01:15:05.241100 | orchestrator |             {
2026-04-08 01:15:05.241104 | orchestrator |                 "rank": 0,
2026-04-08 01:15:05.241108 | orchestrator |                 "name": "testbed-node-0",
2026-04-08 01:15:05.241112 | orchestrator |                 "public_addrs": {
2026-04-08 01:15:05.241116 | orchestrator |                     "addrvec": [
2026-04-08 01:15:05.241120 | orchestrator |                         {
2026-04-08 01:15:05.241124 | orchestrator |                             "type": "v2",
2026-04-08 01:15:05.241128 | orchestrator |                             "addr": "192.168.16.10:3300",
2026-04-08 01:15:05.241132 | orchestrator |                             "nonce": 0
2026-04-08 01:15:05.241135 | orchestrator |                         },
2026-04-08 01:15:05.241140 | orchestrator |                         {
2026-04-08 01:15:05.241144 | orchestrator |                             "type": "v1",
2026-04-08 01:15:05.241147 | orchestrator |                             "addr": "192.168.16.10:6789",
2026-04-08 01:15:05.241151 | orchestrator |                             "nonce": 0
2026-04-08 01:15:05.241157 | orchestrator |                         }
2026-04-08 01:15:05.241163 | orchestrator |                     ]
2026-04-08 01:15:05.241169 | orchestrator |                 },
2026-04-08 01:15:05.241175 | orchestrator |                 "addr": "192.168.16.10:6789/0",
2026-04-08 01:15:05.241181 | orchestrator |                 "public_addr": "192.168.16.10:6789/0",
2026-04-08 01:15:05.241191 | orchestrator |                 "priority": 0,
2026-04-08 01:15:05.241199 | orchestrator |                 "weight": 0,
2026-04-08 01:15:05.241205 | orchestrator |                 "crush_location": "{}"
2026-04-08 01:15:05.241210 | orchestrator |             },
2026-04-08 01:15:05.241217 | orchestrator |             {
2026-04-08 01:15:05.241223 | orchestrator |                 "rank": 1,
2026-04-08 01:15:05.241230 | orchestrator |                 "name": "testbed-node-1",
2026-04-08 01:15:05.241236 | orchestrator |                 "public_addrs": {
2026-04-08 01:15:05.241242 | orchestrator |                     "addrvec": [
2026-04-08 01:15:05.241247 | orchestrator |                         {
2026-04-08 01:15:05.241253 | orchestrator |                             "type": "v2",
2026-04-08 01:15:05.241259 | orchestrator |                             "addr": "192.168.16.11:3300",
2026-04-08 01:15:05.241266 | orchestrator |                             "nonce": 0
2026-04-08 01:15:05.241271 | orchestrator |                         },
2026-04-08 01:15:05.241278 | orchestrator |                         {
2026-04-08 01:15:05.241284 | orchestrator |                             "type": "v1",
2026-04-08 01:15:05.241290 | orchestrator |                             "addr": "192.168.16.11:6789",
2026-04-08 01:15:05.241296 | orchestrator |                             "nonce": 0
2026-04-08 01:15:05.241303 | orchestrator |                         }
2026-04-08 01:15:05.241309 | orchestrator |                     ]
2026-04-08 01:15:05.241316 | orchestrator |                 },
2026-04-08 01:15:05.241379 | orchestrator |                 "addr": "192.168.16.11:6789/0",
2026-04-08 01:15:05.241388 | orchestrator |                 "public_addr": "192.168.16.11:6789/0",
2026-04-08 01:15:05.241395 | orchestrator |                 "priority": 0,
2026-04-08 01:15:05.241402 | orchestrator |                 "weight": 0,
2026-04-08 01:15:05.241407 | orchestrator |                 "crush_location": "{}"
2026-04-08 01:15:05.241411 | orchestrator |             },
2026-04-08 01:15:05.241416 | orchestrator |             {
2026-04-08 01:15:05.241421 | orchestrator |                 "rank": 2,
2026-04-08 01:15:05.241425 | orchestrator |                 "name": "testbed-node-2",
2026-04-08 01:15:05.241430 | orchestrator |                 "public_addrs": {
2026-04-08 01:15:05.241435 | orchestrator |                     "addrvec": [
2026-04-08 01:15:05.241439 | orchestrator |                         {
2026-04-08 01:15:05.241446 | orchestrator |                             "type": "v2",
2026-04-08 01:15:05.241452 | orchestrator |                             "addr": "192.168.16.12:3300",
2026-04-08 01:15:05.241458 | orchestrator |                             "nonce": 0
2026-04-08 01:15:05.241465 | orchestrator |                         },
2026-04-08 01:15:05.241471 | orchestrator |                         {
2026-04-08 01:15:05.241477 | orchestrator |                             "type": "v1",
2026-04-08 01:15:05.241483 | orchestrator |                             "addr": "192.168.16.12:6789",
2026-04-08 01:15:05.241489 | orchestrator |                             "nonce": 0
2026-04-08 01:15:05.241496 | orchestrator |                         }
2026-04-08 01:15:05.241502 | orchestrator |                     ]
2026-04-08 01:15:05.241567 | orchestrator |                 },
2026-04-08 01:15:05.241574 | orchestrator |                 "addr": "192.168.16.12:6789/0",
2026-04-08 01:15:05.241579 | orchestrator |                 "public_addr": "192.168.16.12:6789/0",
2026-04-08 01:15:05.241586 | orchestrator |                 "priority": 0,
2026-04-08 01:15:05.241593 | orchestrator |                 "weight": 0,
2026-04-08 01:15:05.241597 | orchestrator |                 "crush_location": "{}"
2026-04-08 01:15:05.241608 | orchestrator |             }
2026-04-08 01:15:05.241612 | orchestrator |         ]
2026-04-08 01:15:05.241616 | orchestrator |     }
2026-04-08 01:15:05.241620 | orchestrator | }
2026-04-08 01:15:05.241699 | orchestrator | 
2026-04-08 01:15:05.241705 | orchestrator | # Ceph free space status
2026-04-08 01:15:05.241709 | orchestrator | 
2026-04-08 01:15:05.241713 | orchestrator | + echo
2026-04-08 01:15:05.241718 | orchestrator | + echo '# Ceph free space status'
2026-04-08 01:15:05.241722 | orchestrator | + echo
2026-04-08 01:15:05.241726 | orchestrator | + ceph df
2026-04-08 01:15:05.875707 | orchestrator | --- RAW STORAGE ---
2026-04-08 01:15:05.875782 | orchestrator | CLASS    SIZE     AVAIL    USED     RAW USED  %RAW USED
2026-04-08 01:15:05.875799 | orchestrator | hdd      120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2026-04-08 01:15:05.875803 | orchestrator | TOTAL    120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2026-04-08 01:15:05.875808 | orchestrator | 
2026-04-08 01:15:05.875813 | orchestrator | --- POOLS ---
2026-04-08 01:15:05.875818 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2026-04-08 01:15:05.875824 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
2026-04-08 01:15:05.875828 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2026-04-08 01:15:05.875833 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2026-04-08 01:15:05.875837 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2026-04-08 01:15:05.875841 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2026-04-08 01:15:05.875845 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2026-04-08 01:15:05.875849 | orchestrator | default.rgw.log             7   32  3.6 KiB      209  408 KiB      0     35 GiB
2026-04-08 01:15:05.875853 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2026-04-08 01:15:05.875856 | orchestrator | .rgw.root                   9   32  3.9 KiB        8   64 KiB      0     53 GiB
2026-04-08 01:15:05.875860 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2026-04-08 01:15:05.875864 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2026-04-08 01:15:05.875868 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.96     35 GiB
2026-04-08 01:15:05.875872 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2026-04-08 01:15:05.875876 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2026-04-08 01:15:05.932619 | orchestrator | ++ semver latest 5.0.0
2026-04-08 01:15:05.993776 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-08 01:15:05.993860 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-08 01:15:05.993871 | orchestrator | + osism apply facts
2026-04-08 01:15:17.350095 | orchestrator | 2026-04-08 01:15:17 | INFO  | Prepare task for execution of facts.
2026-04-08 01:15:17.428903 | orchestrator | 2026-04-08 01:15:17 | INFO  | Task b259812b-9e73-4b4a-929a-d9b385a45733 (facts) was prepared for execution.
2026-04-08 01:15:17.428991 | orchestrator | 2026-04-08 01:15:17 | INFO  | It takes a moment until task b259812b-9e73-4b4a-929a-d9b385a45733 (facts) has been started and output is visible here.
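The monitor map dumped earlier in the log (`"mons": [...]` with per-monitor `addrvec` entries) is plain JSON, so it can be post-processed directly from a saved console log or from `ceph mon dump --format json` output. A minimal sketch, using a trimmed copy of the structure shown above (addresses taken verbatim from the log; the helper name `v2_addresses` is ours, not part of Ceph or OSISM):

```python
import json

# Trimmed copy of the "mons" array from the monitor map printed above.
mon_dump = {
    "mons": [
        {"rank": 0, "name": "testbed-node-0", "public_addrs": {"addrvec": [
            {"type": "v2", "addr": "192.168.16.10:3300", "nonce": 0},
            {"type": "v1", "addr": "192.168.16.10:6789", "nonce": 0}]}},
        {"rank": 1, "name": "testbed-node-1", "public_addrs": {"addrvec": [
            {"type": "v2", "addr": "192.168.16.11:3300", "nonce": 0},
            {"type": "v1", "addr": "192.168.16.11:6789", "nonce": 0}]}},
        {"rank": 2, "name": "testbed-node-2", "public_addrs": {"addrvec": [
            {"type": "v2", "addr": "192.168.16.12:3300", "nonce": 0},
            {"type": "v1", "addr": "192.168.16.12:6789", "nonce": 0}]}},
    ]
}

def v2_addresses(dump):
    """Map each monitor name to its msgr2 (port 3300) address."""
    return {
        mon["name"]: entry["addr"]
        for mon in dump["mons"]
        for entry in mon["public_addrs"]["addrvec"]
        if entry["type"] == "v2"
    }

print(json.dumps(v2_addresses(mon_dump), indent=2))
```

This matches the three testbed monitors on 192.168.16.10-12 seen in the dump.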
2026-04-08 01:15:29.896043 | orchestrator | 
2026-04-08 01:15:29.896124 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-08 01:15:29.896131 | orchestrator | 
2026-04-08 01:15:29.896136 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-08 01:15:29.896140 | orchestrator | Wednesday 08 April 2026  01:15:20 +0000 (0:00:00.420)       0:00:00.420 *******
2026-04-08 01:15:29.896145 | orchestrator | ok: [testbed-manager]
2026-04-08 01:15:29.896150 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:15:29.896154 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:15:29.896158 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:15:29.896162 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:15:29.896167 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:15:29.896171 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:15:29.896175 | orchestrator | 
2026-04-08 01:15:29.896179 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-08 01:15:29.896210 | orchestrator | Wednesday 08 April 2026  01:15:22 +0000 (0:00:01.433)       0:00:01.853 *******
2026-04-08 01:15:29.896215 | orchestrator | skipping: [testbed-manager]
2026-04-08 01:15:29.896229 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:15:29.896234 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:15:29.896238 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:15:29.896242 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:15:29.896245 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:15:29.896249 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:15:29.896253 | orchestrator | 
2026-04-08 01:15:29.896257 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-08 01:15:29.896261 | orchestrator | 
2026-04-08 01:15:29.896265 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-08 01:15:29.896269 | orchestrator | Wednesday 08 April 2026  01:15:23 +0000 (0:00:01.345)       0:00:03.198 *******
2026-04-08 01:15:29.896273 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:15:29.896277 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:15:29.896281 | orchestrator | ok: [testbed-manager]
2026-04-08 01:15:29.896284 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:15:29.896288 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:15:29.896292 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:15:29.896296 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:15:29.896300 | orchestrator | 
2026-04-08 01:15:29.896303 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-08 01:15:29.896307 | orchestrator | 
2026-04-08 01:15:29.896311 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-08 01:15:29.896315 | orchestrator | Wednesday 08 April 2026  01:15:28 +0000 (0:00:05.257)       0:00:08.455 *******
2026-04-08 01:15:29.896319 | orchestrator | skipping: [testbed-manager]
2026-04-08 01:15:29.896323 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:15:29.896327 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:15:29.896331 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:15:29.896335 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:15:29.896338 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:15:29.896350 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:15:29.896354 | orchestrator | 
2026-04-08 01:15:29.896358 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:15:29.896368 | orchestrator | testbed-manager            : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:15:29.896373 | orchestrator | testbed-node-0             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:15:29.896377 | orchestrator | testbed-node-1             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:15:29.896381 | orchestrator | testbed-node-2             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:15:29.896385 | orchestrator | testbed-node-3             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:15:29.896389 | orchestrator | testbed-node-4             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:15:29.896392 | orchestrator | testbed-node-5             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:15:29.896396 | orchestrator | 
2026-04-08 01:15:29.896400 | orchestrator | 
2026-04-08 01:15:29.896404 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:15:29.896408 | orchestrator | Wednesday 08 April 2026  01:15:29 +0000 (0:00:00.720)       0:00:09.176 *******
2026-04-08 01:15:29.896412 | orchestrator | ===============================================================================
2026-04-08 01:15:29.896416 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.26s
2026-04-08 01:15:29.896423 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.43s
2026-04-08 01:15:29.896427 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.35s
2026-04-08 01:15:29.896431 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s
2026-04-08 01:15:30.153037 | orchestrator | + osism validate ceph-mons
2026-04-08 01:16:02.361110 | orchestrator | 
2026-04-08 01:16:02.361210 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-08 01:16:02.361224 | orchestrator | 
2026-04-08 01:16:02.361231 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-08 01:16:02.361239 | orchestrator | Wednesday 08 April 2026  01:15:45 +0000 (0:00:00.531)       0:00:00.531 *******
2026-04-08 01:16:02.361245 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:02.361251 | orchestrator | 
2026-04-08 01:16:02.361257 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-08 01:16:02.361264 | orchestrator | Wednesday 08 April 2026  01:15:46 +0000 (0:00:00.979)       0:00:01.510 *******
2026-04-08 01:16:02.361271 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:02.361278 | orchestrator | 
2026-04-08 01:16:02.361285 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-08 01:16:02.361291 | orchestrator | Wednesday 08 April 2026  01:15:47 +0000 (0:00:00.746)       0:00:02.256 *******
2026-04-08 01:16:02.361298 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361306 | orchestrator | 
2026-04-08 01:16:02.361312 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-08 01:16:02.361319 | orchestrator | Wednesday 08 April 2026  01:15:47 +0000 (0:00:00.143)       0:00:02.400 *******
2026-04-08 01:16:02.361326 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361332 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:02.361339 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:02.361345 | orchestrator | 
2026-04-08 01:16:02.361370 | orchestrator | TASK [Get container info] ******************************************************
2026-04-08 01:16:02.361377 | orchestrator | Wednesday 08 April 2026  01:15:47 +0000 (0:00:00.280)       0:00:02.681 *******
2026-04-08 01:16:02.361384 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:02.361391 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:02.361397 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361403 | orchestrator | 
2026-04-08 01:16:02.361410 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-08 01:16:02.361424 | orchestrator | Wednesday 08 April 2026  01:15:49 +0000 (0:00:01.638)       0:00:04.319 *******
2026-04-08 01:16:02.361430 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.361437 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:16:02.361444 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:16:02.361450 | orchestrator | 
2026-04-08 01:16:02.361476 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-08 01:16:02.361482 | orchestrator | Wednesday 08 April 2026  01:15:49 +0000 (0:00:00.300)       0:00:04.619 *******
2026-04-08 01:16:02.361488 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361494 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:02.361500 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:02.361507 | orchestrator | 
2026-04-08 01:16:02.361513 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-08 01:16:02.361520 | orchestrator | Wednesday 08 April 2026  01:15:49 +0000 (0:00:00.323)       0:00:04.920 *******
2026-04-08 01:16:02.361526 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361533 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:02.361539 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:02.361546 | orchestrator | 
2026-04-08 01:16:02.361552 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-08 01:16:02.361558 | orchestrator | Wednesday 08 April 2026  01:15:50 +0000 (0:00:00.563)       0:00:05.244 *******
2026-04-08 01:16:02.361564 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.361639 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:16:02.361647 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:16:02.361653 | orchestrator | 
2026-04-08 01:16:02.361660 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-08 01:16:02.361666 | orchestrator | Wednesday 08 April 2026  01:15:50 +0000 (0:00:00.323)       0:00:05.807 *******
2026-04-08 01:16:02.361673 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361679 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:02.361685 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:02.361692 | orchestrator | 
2026-04-08 01:16:02.361714 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-08 01:16:02.361721 | orchestrator | Wednesday 08 April 2026  01:15:50 +0000 (0:00:00.323)       0:00:06.131 *******
2026-04-08 01:16:02.361727 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.361734 | orchestrator | 
2026-04-08 01:16:02.361740 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-08 01:16:02.361747 | orchestrator | Wednesday 08 April 2026  01:15:51 +0000 (0:00:00.257)       0:00:06.388 *******
2026-04-08 01:16:02.361753 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.361759 | orchestrator | 
2026-04-08 01:16:02.361766 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-08 01:16:02.361773 | orchestrator | Wednesday 08 April 2026  01:15:51 +0000 (0:00:00.254)       0:00:06.643 *******
2026-04-08 01:16:02.361780 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.361786 | orchestrator | 
2026-04-08 01:16:02.361793 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:02.361799 | orchestrator | Wednesday 08 April 2026  01:15:51 +0000 (0:00:00.076)       0:00:06.927 *******
2026-04-08 01:16:02.361806 | orchestrator | 
2026-04-08 01:16:02.361812 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:02.361818 | orchestrator | Wednesday 08 April 2026  01:15:51 +0000 (0:00:00.076)       0:00:07.004 *******
2026-04-08 01:16:02.361825 | orchestrator | 
2026-04-08 01:16:02.361831 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:02.361837 | orchestrator | Wednesday 08 April 2026  01:15:51 +0000 (0:00:00.083)       0:00:07.087 *******
2026-04-08 01:16:02.361843 | orchestrator | 
2026-04-08 01:16:02.361849 | orchestrator | TASK [Print report file information] *******************************************
2026-04-08 01:16:02.361856 | orchestrator | Wednesday 08 April 2026  01:15:52 +0000 (0:00:00.321)       0:00:07.409 *******
2026-04-08 01:16:02.361862 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.361869 | orchestrator | 
2026-04-08 01:16:02.361875 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-08 01:16:02.361881 | orchestrator | Wednesday 08 April 2026  01:15:52 +0000 (0:00:00.260)       0:00:07.669 *******
2026-04-08 01:16:02.361887 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.361892 | orchestrator | 
2026-04-08 01:16:02.361917 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-08 01:16:02.361924 | orchestrator | Wednesday 08 April 2026  01:15:52 +0000 (0:00:00.266)       0:00:07.936 *******
2026-04-08 01:16:02.361930 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361935 | orchestrator | 
2026-04-08 01:16:02.361941 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-08 01:16:02.361947 | orchestrator | Wednesday 08 April 2026  01:15:52 +0000 (0:00:00.122)       0:00:08.059 *******
2026-04-08 01:16:02.361953 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:16:02.361959 | orchestrator | 
2026-04-08 01:16:02.361965 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-08 01:16:02.361971 | orchestrator | Wednesday 08 April 2026  01:15:54 +0000 (0:00:01.848)       0:00:09.907 *******
2026-04-08 01:16:02.361978 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.361984 | orchestrator | 
2026-04-08 01:16:02.361990 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-08 01:16:02.361997 | orchestrator | Wednesday 08 April 2026  01:15:55 +0000 (0:00:00.345)       0:00:10.253 *******
2026-04-08 01:16:02.362011 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.362071 | orchestrator | 
2026-04-08 01:16:02.362078 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-08 01:16:02.362084 | orchestrator | Wednesday 08 April 2026  01:15:55 +0000 (0:00:00.130)       0:00:10.383 *******
2026-04-08 01:16:02.362091 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.362097 | orchestrator | 
2026-04-08 01:16:02.362104 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-08 01:16:02.362110 | orchestrator | Wednesday 08 April 2026  01:15:55 +0000 (0:00:00.322)       0:00:10.706 *******
2026-04-08 01:16:02.362121 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.362127 | orchestrator | 
2026-04-08 01:16:02.362134 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-08 01:16:02.362140 | orchestrator | Wednesday 08 April 2026  01:15:55 +0000 (0:00:00.314)       0:00:11.020 *******
2026-04-08 01:16:02.362147 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.362154 | orchestrator | 
2026-04-08 01:16:02.362161 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-08 01:16:02.362167 | orchestrator | Wednesday 08 April 2026  01:15:55 +0000 (0:00:00.119)       0:00:11.140 *******
2026-04-08 01:16:02.362174 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.362180 | orchestrator | 
2026-04-08 01:16:02.362185 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-08 01:16:02.362191 | orchestrator | Wednesday 08 April 2026  01:15:56 +0000 (0:00:00.122)       0:00:11.263 *******
2026-04-08 01:16:02.362197 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.362203 | orchestrator | 
2026-04-08 01:16:02.362210 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-08 01:16:02.362215 | orchestrator | Wednesday 08 April 2026  01:15:56 +0000 (0:00:00.313)       0:00:11.577 *******
2026-04-08 01:16:02.362221 | orchestrator | changed: [testbed-node-0]
2026-04-08 01:16:02.362227 | orchestrator | 
2026-04-08 01:16:02.362233 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-08 01:16:02.362240 | orchestrator | Wednesday 08 April 2026  01:15:57 +0000 (0:00:01.490)       0:00:13.067 *******
2026-04-08 01:16:02.362247 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.362253 | orchestrator | 
2026-04-08 01:16:02.362259 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-08 01:16:02.362265 | orchestrator | Wednesday 08 April 2026  01:15:58 +0000 (0:00:00.338)       0:00:13.406 *******
2026-04-08 01:16:02.362271 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.362277 | orchestrator | 
2026-04-08 01:16:02.362283 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-08 01:16:02.362289 | orchestrator | Wednesday 08 April 2026  01:15:58 +0000 (0:00:00.142)       0:00:13.548 *******
2026-04-08 01:16:02.362295 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:02.362302 | orchestrator | 
2026-04-08 01:16:02.362309 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-08 01:16:02.362315 | orchestrator | Wednesday 08 April 2026  01:15:58 +0000 (0:00:00.155)       0:00:13.704 *******
2026-04-08 01:16:02.362322 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.362328 | orchestrator | 
2026-04-08 01:16:02.362334 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-08 01:16:02.362341 | orchestrator | Wednesday 08 April 2026  01:15:58 +0000 (0:00:00.140)       0:00:13.845 *******
2026-04-08 01:16:02.362347 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.362354 | orchestrator | 
2026-04-08 01:16:02.362360 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-08 01:16:02.362367 | orchestrator | Wednesday 08 April 2026  01:15:58 +0000 (0:00:00.142)       0:00:13.987 *******
2026-04-08 01:16:02.362373 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:02.362380 | orchestrator | 
2026-04-08 01:16:02.362386 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-08 01:16:02.362393 | orchestrator | Wednesday 08 April 2026  01:15:59 +0000 (0:00:00.269)       0:00:14.257 *******
2026-04-08 01:16:02.362405 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:02.362412 | orchestrator | 
2026-04-08 01:16:02.362423 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-08 01:16:02.362430 | orchestrator | Wednesday 08 April 2026  01:15:59 +0000 (0:00:00.266)       0:00:14.524 *******
2026-04-08 01:16:02.362436 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:02.362443 | orchestrator | 
2026-04-08 01:16:02.362449 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-08 01:16:02.362495 | orchestrator | Wednesday 08 April 2026  01:16:01 +0000 (0:00:02.001)       0:00:16.525 *******
2026-04-08 01:16:02.362503 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:02.362509 | orchestrator | 
2026-04-08 01:16:02.362514 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-08 01:16:02.362520 | orchestrator | Wednesday 08 April 2026  01:16:01 +0000 (0:00:00.261)       0:00:16.786 *******
2026-04-08 01:16:02.362527 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:02.362533 | orchestrator | 
2026-04-08 01:16:02.362547 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:04.660383 | orchestrator | Wednesday 08 April 2026  01:16:02 +0000 (0:00:00.805)       0:00:17.592 *******
2026-04-08 01:16:04.660504 | orchestrator | 
2026-04-08 01:16:04.660517 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:04.660525 | orchestrator | Wednesday 08 April 2026  01:16:02 +0000 (0:00:00.078)       0:00:17.670 *******
2026-04-08 01:16:04.660532 | orchestrator | 
2026-04-08 01:16:04.660538 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:04.660545 | orchestrator | Wednesday 08 April 2026  01:16:02 +0000 (0:00:00.071)       0:00:17.742 *******
2026-04-08 01:16:04.660552 | orchestrator | 
2026-04-08 01:16:04.660558 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-08 01:16:04.660565 | orchestrator | Wednesday 08 April 2026  01:16:02 +0000 (0:00:00.075)       0:00:17.818 *******
2026-04-08 01:16:04.660572 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:04.660578 | orchestrator | 
2026-04-08 01:16:04.660584 | orchestrator | TASK [Print report file information] *******************************************
2026-04-08 01:16:04.660591 | orchestrator | Wednesday 08 April 2026  01:16:03 +0000 (0:00:01.368)       0:00:19.186 *******
2026-04-08 01:16:04.660597 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-08 01:16:04.660604 | orchestrator |     "msg": [
2026-04-08 01:16:04.660612 | orchestrator |         "Validator run completed.",
2026-04-08 01:16:04.660619 | orchestrator |         "You can find the report file here:",
2026-04-08 01:16:04.660626 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-04-08T01:15:46+00:00-report.json",
2026-04-08 01:16:04.660632 | orchestrator |         "on the following host:",
2026-04-08 01:16:04.660636 | orchestrator |         "testbed-manager"
2026-04-08 01:16:04.660641 | orchestrator |     ]
2026-04-08 01:16:04.660645 | orchestrator | }
2026-04-08 01:16:04.660650 | orchestrator | 
2026-04-08 01:16:04.660654 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:16:04.660659 | orchestrator | testbed-node-0             : ok=24   changed=5    unreachable=0    failed=0    skipped=13   rescued=0    ignored=0
2026-04-08 01:16:04.660665 | orchestrator | testbed-node-1             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:16:04.660670 | orchestrator | testbed-node-2             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-04-08 01:16:04.660674 | orchestrator | 
2026-04-08 01:16:04.660678 | orchestrator | 
2026-04-08 01:16:04.660682 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:16:04.660686 | orchestrator | Wednesday 08 April 2026  01:16:04 +0000 (0:00:00.413)       0:00:19.600 *******
2026-04-08 01:16:04.660713 | orchestrator | ===============================================================================
2026-04-08 01:16:04.660718 | orchestrator | Aggregate test results step one ----------------------------------------- 2.00s
2026-04-08 01:16:04.660722 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.85s
2026-04-08 01:16:04.660726 | orchestrator | Get container info ------------------------------------------------------ 1.64s
2026-04-08 01:16:04.660730 | orchestrator | Gather status data ------------------------------------------------------ 1.49s
2026-04-08 01:16:04.660734 | orchestrator | Write report file ------------------------------------------------------- 1.37s
2026-04-08 01:16:04.660738 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s
2026-04-08 01:16:04.660742 | orchestrator | Aggregate test results step three --------------------------------------- 0.81s
2026-04-08 01:16:04.660746 | orchestrator | Create report output directory ------------------------------------------ 0.75s
2026-04-08 01:16:04.660752 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.56s
2026-04-08 01:16:04.660758 | orchestrator | Flush handlers ---------------------------------------------------------- 0.48s
2026-04-08 01:16:04.660764 | orchestrator | Print report file information ------------------------------------------- 0.41s
2026-04-08 01:16:04.660770 | orchestrator | Set quorum test data ---------------------------------------------------- 0.35s
2026-04-08 01:16:04.660776 | orchestrator | Set health test data ---------------------------------------------------- 0.34s
2026-04-08 01:16:04.660783 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.32s
2026-04-08 01:16:04.660787 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-04-08 01:16:04.660790 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2026-04-08 01:16:04.660794 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2026-04-08 01:16:04.660798 | orchestrator | Prepare status test vars ------------------------------------------------ 0.31s
2026-04-08 01:16:04.660802 | orchestrator | Set test result to passed if container is existing ---------------------- 0.30s
2026-04-08 01:16:04.660806 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-04-08 01:16:04.909101 | orchestrator | + osism validate ceph-mgrs
2026-04-08 01:16:34.191491 | orchestrator | 
2026-04-08 01:16:34.191561 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-08 01:16:34.191570 | orchestrator | 
2026-04-08 01:16:34.191576 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-08 01:16:34.191581 | orchestrator | Wednesday 08 April 2026  01:16:19 +0000 (0:00:00.512)       0:00:00.512 *******
2026-04-08 01:16:34.191586 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:34.191591 | orchestrator | 
2026-04-08 01:16:34.191596 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-08 01:16:34.191601 | orchestrator | Wednesday 08 April 2026  01:16:20 +0000 (0:00:01.010)       0:00:01.523 *******
2026-04-08 01:16:34.191605 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-08 01:16:34.191610 | orchestrator | 
2026-04-08 01:16:34.191620 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-08 01:16:34.191625 | orchestrator | Wednesday 08 April 2026  01:16:21 +0000 (0:00:00.710)       0:00:02.233 *******
2026-04-08 01:16:34.191630 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:34.191635 | orchestrator | 
2026-04-08 01:16:34.191640 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-08 01:16:34.191653 | orchestrator | Wednesday 08 April 2026  01:16:21 +0000 (0:00:00.144)       0:00:02.378 *******
2026-04-08 01:16:34.191658 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:34.191667 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:34.191672 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:34.191678 | orchestrator | 
2026-04-08 01:16:34.191686 | orchestrator | TASK [Get container info] ******************************************************
2026-04-08 01:16:34.191695 | orchestrator | Wednesday 08 April 2026  01:16:22 +0000 (0:00:00.282)       0:00:02.660 *******
2026-04-08 01:16:34.191721 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:34.191729 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:34.191736 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:34.191744 | orchestrator | 
2026-04-08 01:16:34.191751 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-08 01:16:34.191759 | orchestrator | Wednesday 08 April 2026  01:16:23 +0000 (0:00:01.530)       0:00:04.191 *******
2026-04-08 01:16:34.191767 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:34.191775 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:16:34.191783 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:16:34.191790 | orchestrator | 
2026-04-08 01:16:34.191801 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-08 01:16:34.191807 | orchestrator | Wednesday 08 April 2026  01:16:23 +0000 (0:00:00.310)       0:00:04.501 *******
2026-04-08 01:16:34.191812 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:34.191816 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:34.191821 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:34.191825 | orchestrator | 
2026-04-08 01:16:34.191830 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-08 01:16:34.191835 | orchestrator | Wednesday 08 April 2026  01:16:24 +0000 (0:00:00.303)       0:00:04.804 *******
2026-04-08 01:16:34.191840 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:34.191844 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:34.191849 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:34.191853 | orchestrator | 
2026-04-08 01:16:34.191858 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-08 01:16:34.191863 | orchestrator | Wednesday 08 April 2026  01:16:24 +0000 (0:00:00.309)       0:00:05.114 *******
2026-04-08 01:16:34.191868 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:34.191873 | orchestrator | skipping: [testbed-node-1]
2026-04-08 01:16:34.191881 | orchestrator | skipping: [testbed-node-2]
2026-04-08 01:16:34.191889 | orchestrator | 
2026-04-08 01:16:34.191896 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-08 01:16:34.191904 | orchestrator | Wednesday 08 April 2026  01:16:25 +0000 (0:00:00.451)       0:00:05.565 *******
2026-04-08 01:16:34.191911 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:34.191919 | orchestrator | ok: [testbed-node-1]
2026-04-08 01:16:34.191927 | orchestrator | ok: [testbed-node-2]
2026-04-08 01:16:34.191935 | orchestrator | 
2026-04-08 01:16:34.191942 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-08 01:16:34.191946 | orchestrator | Wednesday 08 April 2026  01:16:25 +0000 (0:00:00.307)       0:00:05.872 *******
2026-04-08 01:16:34.191951 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:34.191956 | orchestrator | 
2026-04-08 01:16:34.191961 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-08 01:16:34.191965 | orchestrator | Wednesday 08 April 2026  01:16:25 +0000 (0:00:00.234)       0:00:06.107 *******
2026-04-08 01:16:34.191970 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:34.191975 | orchestrator | 
2026-04-08 01:16:34.191980 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-08 01:16:34.191985 | orchestrator | Wednesday 08 April 2026  01:16:25 +0000 (0:00:00.236)       0:00:06.343 *******
2026-04-08 01:16:34.191989 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:34.191994 | orchestrator | 
2026-04-08 01:16:34.191999 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:34.192004 | orchestrator | Wednesday 08 April 2026  01:16:26 +0000 (0:00:00.250)       0:00:06.593 *******
2026-04-08 01:16:34.192008 | orchestrator | 
2026-04-08 01:16:34.192013 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:34.192018 | orchestrator | Wednesday 08 April 2026  01:16:26 +0000 (0:00:00.070)       0:00:06.664 *******
2026-04-08 01:16:34.192022 | orchestrator | 
2026-04-08 01:16:34.192027 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:16:34.192032 | orchestrator | Wednesday 08 April 2026  01:16:26 +0000 (0:00:00.071)       0:00:06.735 *******
2026-04-08 01:16:34.192042 | orchestrator | 
2026-04-08 01:16:34.192047 | orchestrator | TASK [Print report file information] *******************************************
2026-04-08 01:16:34.192052 | orchestrator | Wednesday 08 April 2026  01:16:26 +0000 (0:00:00.249)       0:00:06.985 *******
2026-04-08 01:16:34.192056 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:34.192061 | orchestrator | 
2026-04-08 01:16:34.192066 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-08 01:16:34.192070 | orchestrator | Wednesday 08 April 2026  01:16:26 +0000 (0:00:00.264)       0:00:07.250 *******
2026-04-08 01:16:34.192075 | orchestrator | skipping: [testbed-node-0]
2026-04-08 01:16:34.192080 | orchestrator | 
2026-04-08 01:16:34.192096 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-08 01:16:34.192101 | orchestrator | Wednesday 08 April 2026  01:16:26 +0000 (0:00:00.242)       0:00:07.493 *******
2026-04-08 01:16:34.192106 | orchestrator | ok: [testbed-node-0]
2026-04-08 01:16:34.192110 | orchestrator | 
2026-04-08 01:16:34.192115 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-08 01:16:34.192120 | orchestrator | Wednesday 08 April 2026 01:16:27 +0000 (0:00:00.122) 0:00:07.615 ******* 2026-04-08 01:16:34.192124 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:16:34.192129 | orchestrator | 2026-04-08 01:16:34.192134 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-08 01:16:34.192138 | orchestrator | Wednesday 08 April 2026 01:16:28 +0000 (0:00:01.771) 0:00:09.387 ******* 2026-04-08 01:16:34.192143 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:16:34.192148 | orchestrator | 2026-04-08 01:16:34.192152 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-08 01:16:34.192157 | orchestrator | Wednesday 08 April 2026 01:16:29 +0000 (0:00:00.244) 0:00:09.631 ******* 2026-04-08 01:16:34.192162 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:16:34.192167 | orchestrator | 2026-04-08 01:16:34.192171 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-08 01:16:34.192176 | orchestrator | Wednesday 08 April 2026 01:16:29 +0000 (0:00:00.318) 0:00:09.949 ******* 2026-04-08 01:16:34.192181 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:16:34.192185 | orchestrator | 2026-04-08 01:16:34.192190 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-08 01:16:34.192209 | orchestrator | Wednesday 08 April 2026 01:16:29 +0000 (0:00:00.146) 0:00:10.096 ******* 2026-04-08 01:16:34.192214 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:16:34.192219 | orchestrator | 2026-04-08 01:16:34.192223 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-08 01:16:34.192228 | orchestrator | Wednesday 08 April 2026 01:16:29 +0000 (0:00:00.136) 0:00:10.232 ******* 2026-04-08 01:16:34.192233 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-08 
01:16:34.192237 | orchestrator | 2026-04-08 01:16:34.192242 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-08 01:16:34.192247 | orchestrator | Wednesday 08 April 2026 01:16:29 +0000 (0:00:00.245) 0:00:10.478 ******* 2026-04-08 01:16:34.192254 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:16:34.192259 | orchestrator | 2026-04-08 01:16:34.192264 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-08 01:16:34.192268 | orchestrator | Wednesday 08 April 2026 01:16:30 +0000 (0:00:00.256) 0:00:10.734 ******* 2026-04-08 01:16:34.192273 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-08 01:16:34.192278 | orchestrator | 2026-04-08 01:16:34.192282 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-08 01:16:34.192287 | orchestrator | Wednesday 08 April 2026 01:16:31 +0000 (0:00:01.550) 0:00:12.285 ******* 2026-04-08 01:16:34.192292 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-08 01:16:34.192296 | orchestrator | 2026-04-08 01:16:34.192301 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-08 01:16:34.192306 | orchestrator | Wednesday 08 April 2026 01:16:31 +0000 (0:00:00.262) 0:00:12.547 ******* 2026-04-08 01:16:34.192313 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-08 01:16:34.192319 | orchestrator | 2026-04-08 01:16:34.192326 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-08 01:16:34.192338 | orchestrator | Wednesday 08 April 2026 01:16:32 +0000 (0:00:00.278) 0:00:12.825 ******* 2026-04-08 01:16:34.192346 | orchestrator | 2026-04-08 01:16:34.192353 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-08 01:16:34.192361 | orchestrator 
| Wednesday 08 April 2026 01:16:32 +0000 (0:00:00.068) 0:00:12.893 ******* 2026-04-08 01:16:34.192368 | orchestrator | 2026-04-08 01:16:34.192375 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-08 01:16:34.192383 | orchestrator | Wednesday 08 April 2026 01:16:32 +0000 (0:00:00.083) 0:00:12.977 ******* 2026-04-08 01:16:34.192390 | orchestrator | 2026-04-08 01:16:34.192397 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-08 01:16:34.192404 | orchestrator | Wednesday 08 April 2026 01:16:32 +0000 (0:00:00.074) 0:00:13.051 ******* 2026-04-08 01:16:34.192411 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-08 01:16:34.192417 | orchestrator | 2026-04-08 01:16:34.192425 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-08 01:16:34.192433 | orchestrator | Wednesday 08 April 2026 01:16:33 +0000 (0:00:01.286) 0:00:14.338 ******* 2026-04-08 01:16:34.192441 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-08 01:16:34.192467 | orchestrator |  "msg": [ 2026-04-08 01:16:34.192476 | orchestrator |  "Validator run completed.", 2026-04-08 01:16:34.192483 | orchestrator |  "You can find the report file here:", 2026-04-08 01:16:34.192488 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-08T01:16:20+00:00-report.json", 2026-04-08 01:16:34.192494 | orchestrator |  "on the following host:", 2026-04-08 01:16:34.192498 | orchestrator |  "testbed-manager" 2026-04-08 01:16:34.192503 | orchestrator |  ] 2026-04-08 01:16:34.192508 | orchestrator | } 2026-04-08 01:16:34.192513 | orchestrator | 2026-04-08 01:16:34.192517 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:16:34.192523 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-08 01:16:34.192528 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 01:16:34.192539 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 01:16:34.521644 | orchestrator | 2026-04-08 01:16:34.521696 | orchestrator | 2026-04-08 01:16:34.521702 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:16:34.521708 | orchestrator | Wednesday 08 April 2026 01:16:34 +0000 (0:00:00.395) 0:00:14.734 ******* 2026-04-08 01:16:34.521712 | orchestrator | =============================================================================== 2026-04-08 01:16:34.521716 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.77s 2026-04-08 01:16:34.521721 | orchestrator | Aggregate test results step one ----------------------------------------- 1.55s 2026-04-08 01:16:34.521725 | orchestrator | Get container info ------------------------------------------------------ 1.53s 2026-04-08 01:16:34.521729 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2026-04-08 01:16:34.521733 | orchestrator | Get timestamp for report file ------------------------------------------- 1.01s 2026-04-08 01:16:34.521737 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-04-08 01:16:34.521741 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.45s 2026-04-08 01:16:34.521745 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-08 01:16:34.521773 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-04-08 01:16:34.521777 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-04-08 01:16:34.521781 | 
orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-04-08 01:16:34.521785 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-04-08 01:16:34.521789 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s 2026-04-08 01:16:34.521793 | orchestrator | Set test result to passed if container is existing ---------------------- 0.30s 2026-04-08 01:16:34.521797 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-04-08 01:16:34.521801 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2026-04-08 01:16:34.521806 | orchestrator | Print report file information ------------------------------------------- 0.26s 2026-04-08 01:16:34.521810 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2026-04-08 01:16:34.521814 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s 2026-04-08 01:16:34.521818 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s 2026-04-08 01:16:34.720168 | orchestrator | + osism validate ceph-osds 2026-04-08 01:16:53.750607 | orchestrator | 2026-04-08 01:16:53.750672 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-08 01:16:53.750682 | orchestrator | 2026-04-08 01:16:53.750689 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-08 01:16:53.750696 | orchestrator | Wednesday 08 April 2026 01:16:49 +0000 (0:00:00.534) 0:00:00.534 ******* 2026-04-08 01:16:53.750704 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-08 01:16:53.750711 | orchestrator | 2026-04-08 01:16:53.750717 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-04-08 01:16:53.750724 | orchestrator | Wednesday 08 April 2026 01:16:50 +0000 (0:00:01.064) 0:00:01.599 ******* 2026-04-08 01:16:53.750731 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-08 01:16:53.750738 | orchestrator | 2026-04-08 01:16:53.750745 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-08 01:16:53.750751 | orchestrator | Wednesday 08 April 2026 01:16:50 +0000 (0:00:00.238) 0:00:01.838 ******* 2026-04-08 01:16:53.750758 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-08 01:16:53.750765 | orchestrator | 2026-04-08 01:16:53.750773 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-08 01:16:53.750780 | orchestrator | Wednesday 08 April 2026 01:16:51 +0000 (0:00:00.662) 0:00:02.500 ******* 2026-04-08 01:16:53.750788 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:16:53.750796 | orchestrator | 2026-04-08 01:16:53.750803 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-08 01:16:53.750811 | orchestrator | Wednesday 08 April 2026 01:16:51 +0000 (0:00:00.119) 0:00:02.620 ******* 2026-04-08 01:16:53.750818 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:16:53.750829 | orchestrator | 2026-04-08 01:16:53.750838 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-08 01:16:53.750845 | orchestrator | Wednesday 08 April 2026 01:16:51 +0000 (0:00:00.133) 0:00:02.753 ******* 2026-04-08 01:16:53.750852 | orchestrator | skipping: [testbed-node-3] 2026-04-08 01:16:53.750858 | orchestrator | skipping: [testbed-node-4] 2026-04-08 01:16:53.750865 | orchestrator | skipping: [testbed-node-5] 2026-04-08 01:16:53.750872 | orchestrator | 2026-04-08 01:16:53.750880 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-08 01:16:53.750887 | orchestrator | Wednesday 08 April 2026 01:16:52 +0000 (0:00:00.454) 0:00:03.208 ******* 2026-04-08 01:16:53.750894 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:16:53.750902 | orchestrator | 2026-04-08 01:16:53.750909 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-08 01:16:53.750934 | orchestrator | Wednesday 08 April 2026 01:16:52 +0000 (0:00:00.146) 0:00:03.354 ******* 2026-04-08 01:16:53.750942 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:16:53.750948 | orchestrator | ok: [testbed-node-4] 2026-04-08 01:16:53.750955 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:16:53.750962 | orchestrator | 2026-04-08 01:16:53.750970 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-08 01:16:53.750977 | orchestrator | Wednesday 08 April 2026 01:16:52 +0000 (0:00:00.354) 0:00:03.708 ******* 2026-04-08 01:16:53.750984 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:16:53.750991 | orchestrator | 2026-04-08 01:16:53.751011 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-08 01:16:53.751019 | orchestrator | Wednesday 08 April 2026 01:16:53 +0000 (0:00:00.385) 0:00:04.094 ******* 2026-04-08 01:16:53.751027 | orchestrator | ok: [testbed-node-3] 2026-04-08 01:16:53.751035 | orchestrator | ok: [testbed-node-4] 2026-04-08 01:16:53.751043 | orchestrator | ok: [testbed-node-5] 2026-04-08 01:16:53.751050 | orchestrator | 2026-04-08 01:16:53.751058 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-08 01:16:53.751065 | orchestrator | Wednesday 08 April 2026 01:16:53 +0000 (0:00:00.279) 0:00:04.373 ******* 2026-04-08 01:16:53.751074 | orchestrator | skipping: [testbed-node-3] => (item={'id': '86e1ed0250a4506c280da9d10bc40560fe280f70b8d5a9fb097049559041f1ce', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-08 01:16:53.751084 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5b0d59db7282c60e5d1565a5bb52376ae1d42b9bf7d250861ee96d2bb78f2cb9', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.751092 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd5cb8f2bc82812acac383f73869b7d58ff2866f97c816400dd058f74c9776d3c', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.751099 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ee77b222d8e904b3d7235332cf8b55140b7e2a10c8c63465887441ce3d4f3ac2', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.751115 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4d5e1054548357c95a82b216a640932e339809b227133e323ef5b2c6c0b064e8', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-08 01:16:53.751136 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3e944eb9cc252eef7232faaf36a20455af6871a6dd8c036dd4f61275df9d21cb', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-08 01:16:53.751145 | orchestrator | skipping: [testbed-node-3] => (item={'id': '441e1bc3009934d05d1098efbcdfd228e9dadeaa1d61ed2c1875d34242ebfc7f', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-08 01:16:53.751153 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '997d08f503e8a12a957522271fbeaba08cd5417024e671561e06f8993446c649', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-08 01:16:53.751160 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7fef1833953ed4e35d731491fbc1bfe2952e5e4c8ff1f40361ad1b5b672cfd34', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-08 01:16:53.751167 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'db7984756f61376489e8c0b515a586c40db1efde591346feac17462469496218', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-08 01:16:53.751182 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b301a4c056b302eefc20990e656995f074d9dd7ba1744468f206db92b190a571', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-08 01:16:53.751190 | orchestrator | ok: [testbed-node-3] => (item={'id': '7404fd4bd759da4b8bcd2ebdabf7c4a56c3873a240eb3f418e119becb244ef07', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-08 01:16:53.751199 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd204a38fcbfe86faeaeb6c2afeb6f0c22d1b0708ccd198db817586dbafdc59c5', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-08 01:16:53.751207 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e583a63a143075b26c789684cf7e25a79a3fd901d82e26102e8e86245b177810', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
28 minutes (healthy)'})  2026-04-08 01:16:53.751216 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1f1d45191048ae29b28003ca8ceb202113811cd337be231d4cdea56c6f8a0179', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-08 01:16:53.751224 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f389b55be2c3fefc10f9e0f2449aa5ed77708c1e04a44f9fc0ff376b2e224f4f', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-08 01:16:53.751232 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7211e52df506e36f4d04d48709256942e8864a6cb6a251b9e9c8a395931c1060', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-08 01:16:53.751240 | orchestrator | skipping: [testbed-node-3] => (item={'id': '99f4444b96e8e28703e7e3eb74992bb9bb636064969a25abfee28a774723a90b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-08 01:16:53.751247 | orchestrator | skipping: [testbed-node-4] => (item={'id': '07cab0b119b9f9ca2e3dca462f02f8e6fd3edf4d395b990d06d779904ec7c42c', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-08 01:16:53.751255 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3db051d4f6c8fd24ad12e42b44d9aef00c701ea345bee1eea0294019798a3d94', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.751267 | orchestrator | skipping: [testbed-node-4] => (item={'id': '89d70fc2931d1259f09072942e8b34b76b5923fa89e21ba4ebf3883d4900977a', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.751283 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3352896db4ffc2f7e31cdc5dc5b8e04f21305403f69a9575071f9a3c0944da4e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.921725 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b5246b40a06d27c90e54f1b6ad73eccf4d190efa19d115bbfddae227a7840b13', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-08 01:16:53.921789 | orchestrator | skipping: [testbed-node-4] => (item={'id': '007d4d24f3704aa00e6657b5a7e88f6694a3be8cf999fa711c564a590479b235', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-08 01:16:53.921810 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4610c3873ff9c87ec3cd330e59ccec0b9492a547440b849c9203cdba0e3b9809', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-08 01:16:53.921818 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8f9e6ef640a6423abede733ddca9543ad3c9f66f76f4fc5254724f5ff9b536a1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-08 01:16:53.921825 | orchestrator | skipping: [testbed-node-4] => (item={'id': '146419daa59db712b6af8a4c52fc55a2a9cb992c9b13e568e0844ea668708684', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-08 01:16:53.921833 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'425a250b45f1dddb2cd5698f751a6b705d1e629da7a3323ba0363c8df44cee45', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-08 01:16:53.921842 | orchestrator | ok: [testbed-node-4] => (item={'id': '940fc0ca506374885c6d19463763b977a21c421c6cbd6a16f1bf24b17ba60eaa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-08 01:16:53.921849 | orchestrator | ok: [testbed-node-4] => (item={'id': '4a7516d258e440024c559a8f900d2e623d4f83b2e20f7aa12badc9faedff4ffc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-08 01:16:53.921856 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90a817754c94e3a9f9ca8267770c9643b82dd4fdfb0cb2ee6b7b78145d73f12b', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-08 01:16:53.921863 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1e3951c0584a74717f3173653557a689d3cbe2fde26265cb3c700a772f507a9e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-08 01:16:53.921870 | orchestrator | skipping: [testbed-node-4] => (item={'id': '28b3e33935278b2f6142ce6fe32728e6e0880a5f5b77c3837081e429176d2c9a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-08 01:16:53.921877 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'efd6ef37147d911e3dc5900c1ffed7a700c49cfa7e74a55ec15a3d8262185730', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-08 01:16:53.921883 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'd8c4ae0b7cb9972f5e46dcdb8be06ad0655f09550dbc25aa1abbea8f6f63c883', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-08 01:16:53.921891 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8fc68f2da1615c89131e6a67a07c5c9414361a4fb56d89f4317e066f684a327b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-08 01:16:53.921898 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e68da6caa2bd1083dc3a270143c9695f75fca1b222bd62d08aeaaba249b3f975', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-08 01:16:53.921914 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd7528eed81c75cf9d5f8609022dba0fb727441bcfb2b479af17cf25d7fac6591', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.921926 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7c9fdff7eaf5ce9dcb769ef9b0532f80b873a925d0f0df2ec32693b15b84fc31', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.921933 | orchestrator | skipping: [testbed-node-5] => (item={'id': '96f1c1e440cbf0033a71dc735548074dacd7ea399f7838326ffd1526b359181f', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-08 01:16:53.921951 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e18be8a88ec5522390115162c22660e2f7b45eca7940bfabc11bb4e59bcfb399', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 
'status': 'Up 14 minutes'})
2026-04-08 01:16:53.921958 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0aae6fdd3c0b9e8bdf0f94dd0efee2cf353c38a2965da9af3f8cfc3a07c3b501', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-04-08 01:16:53.921965 | orchestrator | skipping: [testbed-node-5] => (item={'id': '89c832fdc9f53f05849e5f7d97c7a5f3dcc67aeb61a437c2684aceb4737dac86', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-04-08 01:16:53.921972 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9581a34597c1c5c2761c549819001e56b6fd51b9d47d63a7e169aa845e712ab8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-04-08 01:16:53.921978 | orchestrator | skipping: [testbed-node-5] => (item={'id': '46cd4ccf01dbaf1e3f126b24f3b0f88916ef85490c270596222f699a4a16f8a1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})
2026-04-08 01:16:53.921985 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fa1ff1c93bf2b7dbd11b4e945ba054e517d1adcaf416256807752d869b86744e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-08 01:16:53.921997 | orchestrator | ok: [testbed-node-5] => (item={'id': '9d0f3a057c19ad07263c61ef54b3d3d6898b2efbc71319c4693b66180eca737e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-08 01:16:53.922004 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c9fdc010ac4b3dbc767c8f790c7eace02ef38baed260fac727e127b3ff72c576', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-08 01:16:53.922057 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd7414fa187af8138c8684e2a6171c4ba89df569c1a8852ac4ce3b0b3dd2f61a2', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2026-04-08 01:16:53.922064 | orchestrator | skipping: [testbed-node-5] => (item={'id': '43dce8c979e122259a12697a52a0179569b9aacd6a1a332a28a85306f6b4d903', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2026-04-08 01:16:53.922071 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11ee459418d6acb219a2da4239f04570cb83838110fa700e59952f1fea45520f', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2026-04-08 01:16:53.922080 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f4ebbd81e80322fc922440d6e46fc8f257d6fd95dacfcf18ed6ef0209259f799', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2026-04-08 01:16:53.922090 | orchestrator | skipping: [testbed-node-5] => (item={'id': '52dfa3163fc63348c7ccb281a75048b4f68e87bf4cbf8d12044d178fe41f6681', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2026-04-08 01:16:53.922103 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'acfb72db25c8d2b84dd6b6c99d104cb03d5d72486644dfc7acc8b7a3d03ea3f9', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2026-04-08 01:17:06.827040 | orchestrator |
2026-04-08 01:17:06.827117 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-04-08 01:17:06.827125 | orchestrator | Wednesday 08 April 2026 01:16:54 +0000 (0:00:00.636) 0:00:05.009 *******
2026-04-08 01:17:06.827129 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827134 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827139 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827143 | orchestrator |
2026-04-08 01:17:06.827147 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-04-08 01:17:06.827151 | orchestrator | Wednesday 08 April 2026 01:16:54 +0000 (0:00:00.291) 0:00:05.318 *******
2026-04-08 01:17:06.827156 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827161 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:06.827165 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:06.827168 | orchestrator |
2026-04-08 01:17:06.827173 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-04-08 01:17:06.827177 | orchestrator | Wednesday 08 April 2026 01:16:54 +0000 (0:00:00.290) 0:00:05.609 *******
2026-04-08 01:17:06.827181 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827185 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827189 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827193 | orchestrator |
2026-04-08 01:17:06.827196 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-08 01:17:06.827200 | orchestrator | Wednesday 08 April 2026 01:16:55 +0000 (0:00:00.290) 0:00:05.900 *******
2026-04-08 01:17:06.827204 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827208 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827212 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827216 | orchestrator |
2026-04-08 01:17:06.827220 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-04-08 01:17:06.827224 | orchestrator | Wednesday 08 April 2026 01:16:55 +0000 (0:00:00.453) 0:00:06.353 *******
2026-04-08 01:17:06.827228 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-04-08 01:17:06.827233 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-04-08 01:17:06.827237 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827241 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-04-08 01:17:06.827245 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-04-08 01:17:06.827249 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:06.827253 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-04-08 01:17:06.827257 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-04-08 01:17:06.827261 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:06.827264 | orchestrator |
2026-04-08 01:17:06.827268 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-04-08 01:17:06.827272 | orchestrator | Wednesday 08 April 2026 01:16:55 +0000 (0:00:00.345) 0:00:06.699 *******
2026-04-08 01:17:06.827276 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827280 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827300 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827304 | orchestrator |
2026-04-08 01:17:06.827308 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-08 01:17:06.827312 | orchestrator | Wednesday 08 April 2026 01:16:56 +0000 (0:00:00.287) 0:00:06.986 *******
2026-04-08 01:17:06.827316 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827320 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:06.827324 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:06.827328 | orchestrator |
2026-04-08 01:17:06.827332 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-08 01:17:06.827335 | orchestrator | Wednesday 08 April 2026 01:16:56 +0000 (0:00:00.320) 0:00:07.307 *******
2026-04-08 01:17:06.827339 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827343 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:06.827347 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:06.827351 | orchestrator |
2026-04-08 01:17:06.827355 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-04-08 01:17:06.827359 | orchestrator | Wednesday 08 April 2026 01:16:56 +0000 (0:00:00.455) 0:00:07.763 *******
2026-04-08 01:17:06.827363 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827367 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827371 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827375 | orchestrator |
2026-04-08 01:17:06.827378 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-08 01:17:06.827389 | orchestrator | Wednesday 08 April 2026 01:16:57 +0000 (0:00:00.260) 0:00:08.092 *******
2026-04-08 01:17:06.827393 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827397 | orchestrator |
2026-04-08 01:17:06.827401 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-08 01:17:06.827405 | orchestrator | Wednesday 08 April 2026 01:16:57 +0000 (0:00:00.237) 0:00:08.352 *******
2026-04-08 01:17:06.827419 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827423 | orchestrator |
2026-04-08 01:17:06.827476 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-08 01:17:06.827481 | orchestrator | Wednesday 08 April 2026 01:16:57 +0000 (0:00:00.239) 0:00:08.590 *******
2026-04-08 01:17:06.827485 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827488 | orchestrator |
2026-04-08 01:17:06.827492 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:17:06.827496 | orchestrator | Wednesday 08 April 2026 01:16:57 +0000 (0:00:00.069) 0:00:08.830 *******
2026-04-08 01:17:06.827500 | orchestrator |
2026-04-08 01:17:06.827504 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:17:06.827508 | orchestrator | Wednesday 08 April 2026 01:16:58 +0000 (0:00:00.069) 0:00:08.900 *******
2026-04-08 01:17:06.827512 | orchestrator |
2026-04-08 01:17:06.827516 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:17:06.827529 | orchestrator | Wednesday 08 April 2026 01:16:58 +0000 (0:00:00.069) 0:00:08.970 *******
2026-04-08 01:17:06.827533 | orchestrator |
2026-04-08 01:17:06.827537 | orchestrator | TASK [Print report file information] *******************************************
2026-04-08 01:17:06.827541 | orchestrator | Wednesday 08 April 2026 01:16:58 +0000 (0:00:00.068) 0:00:09.038 *******
2026-04-08 01:17:06.827545 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827549 | orchestrator |
2026-04-08 01:17:06.827553 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-04-08 01:17:06.827556 | orchestrator | Wednesday 08 April 2026 01:16:58 +0000 (0:00:00.624) 0:00:09.663 *******
2026-04-08 01:17:06.827560 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827564 | orchestrator |
2026-04-08 01:17:06.827568 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-08 01:17:06.827572 | orchestrator | Wednesday 08 April 2026 01:16:59 +0000 (0:00:00.254) 0:00:09.917 *******
2026-04-08 01:17:06.827576 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827580 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827588 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827592 | orchestrator |
2026-04-08 01:17:06.827596 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-04-08 01:17:06.827600 | orchestrator | Wednesday 08 April 2026 01:16:59 +0000 (0:00:00.313) 0:00:10.231 *******
2026-04-08 01:17:06.827604 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827609 | orchestrator |
2026-04-08 01:17:06.827614 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-04-08 01:17:06.827618 | orchestrator | Wednesday 08 April 2026 01:16:59 +0000 (0:00:00.214) 0:00:10.445 *******
2026-04-08 01:17:06.827623 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-08 01:17:06.827627 | orchestrator |
2026-04-08 01:17:06.827632 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-04-08 01:17:06.827636 | orchestrator | Wednesday 08 April 2026 01:17:01 +0000 (0:00:01.986) 0:00:12.432 *******
2026-04-08 01:17:06.827641 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827645 | orchestrator |
2026-04-08 01:17:06.827650 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-04-08 01:17:06.827655 | orchestrator | Wednesday 08 April 2026 01:17:01 +0000 (0:00:00.287) 0:00:12.557 *******
2026-04-08 01:17:06.827660 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827664 | orchestrator |
2026-04-08 01:17:06.827669 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-04-08 01:17:06.827674 | orchestrator | Wednesday 08 April 2026 01:17:02 +0000 (0:00:00.287) 0:00:12.845 *******
2026-04-08 01:17:06.827678 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827683 | orchestrator |
2026-04-08 01:17:06.827687 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-04-08 01:17:06.827692 | orchestrator | Wednesday 08 April 2026 01:17:02 +0000 (0:00:00.128) 0:00:12.974 *******
2026-04-08 01:17:06.827696 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827701 | orchestrator |
2026-04-08 01:17:06.827705 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-08 01:17:06.827710 | orchestrator | Wednesday 08 April 2026 01:17:02 +0000 (0:00:00.146) 0:00:13.120 *******
2026-04-08 01:17:06.827714 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827719 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827724 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827728 | orchestrator |
2026-04-08 01:17:06.827733 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-04-08 01:17:06.827737 | orchestrator | Wednesday 08 April 2026 01:17:02 +0000 (0:00:00.448) 0:00:13.568 *******
2026-04-08 01:17:06.827742 | orchestrator | changed: [testbed-node-3]
2026-04-08 01:17:06.827747 | orchestrator | changed: [testbed-node-4]
2026-04-08 01:17:06.827751 | orchestrator | changed: [testbed-node-5]
2026-04-08 01:17:06.827756 | orchestrator |
2026-04-08 01:17:06.827760 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-04-08 01:17:06.827765 | orchestrator | Wednesday 08 April 2026 01:17:04 +0000 (0:00:01.779) 0:00:15.347 *******
2026-04-08 01:17:06.827769 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827773 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827778 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827782 | orchestrator |
2026-04-08 01:17:06.827787 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-04-08 01:17:06.827792 | orchestrator | Wednesday 08 April 2026 01:17:04 +0000 (0:00:00.290) 0:00:15.638 *******
2026-04-08 01:17:06.827797 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827802 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827806 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827810 | orchestrator |
2026-04-08 01:17:06.827815 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-04-08 01:17:06.827819 | orchestrator | Wednesday 08 April 2026 01:17:05 +0000 (0:00:00.468) 0:00:16.107 *******
2026-04-08 01:17:06.827824 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827829 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:06.827837 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:06.827842 | orchestrator |
2026-04-08 01:17:06.827847 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-04-08 01:17:06.827851 | orchestrator | Wednesday 08 April 2026 01:17:05 +0000 (0:00:00.460) 0:00:16.567 *******
2026-04-08 01:17:06.827860 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:06.827864 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:06.827869 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:06.827874 | orchestrator |
2026-04-08 01:17:06.827878 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-04-08 01:17:06.827883 | orchestrator | Wednesday 08 April 2026 01:17:06 +0000 (0:00:00.322) 0:00:16.869 *******
2026-04-08 01:17:06.827887 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827892 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:06.827897 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:06.827901 | orchestrator |
2026-04-08 01:17:06.827906 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-04-08 01:17:06.827911 | orchestrator | Wednesday 08 April 2026 01:17:06 +0000 (0:00:00.468) 0:00:17.191 *******
2026-04-08 01:17:06.827915 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:06.827920 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:06.827924 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:06.827929 | orchestrator |
2026-04-08 01:17:06.827936 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-08 01:17:13.991572 | orchestrator | Wednesday 08 April 2026 01:17:06 +0000 (0:00:00.468) 0:00:17.660 *******
2026-04-08 01:17:13.991666 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:13.991675 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:13.991680 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:13.991685 | orchestrator |
2026-04-08 01:17:13.991691 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-04-08 01:17:13.991696 | orchestrator | Wednesday 08 April 2026 01:17:07 +0000 (0:00:00.491) 0:00:18.151 *******
2026-04-08 01:17:13.991700 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:13.991705 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:13.991709 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:13.991715 | orchestrator |
2026-04-08 01:17:13.991722 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-08 01:17:13.991729 | orchestrator | Wednesday 08 April 2026 01:17:07 +0000 (0:00:00.506) 0:00:18.658 *******
2026-04-08 01:17:13.991739 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:13.991746 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:13.991753 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:13.991759 | orchestrator |
2026-04-08 01:17:13.991766 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-08 01:17:13.991773 | orchestrator | Wednesday 08 April 2026 01:17:08 +0000 (0:00:00.304) 0:00:18.962 *******
2026-04-08 01:17:13.991780 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:13.991787 | orchestrator | skipping: [testbed-node-4]
2026-04-08 01:17:13.991793 | orchestrator | skipping: [testbed-node-5]
2026-04-08 01:17:13.991799 | orchestrator |
2026-04-08 01:17:13.991805 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-08 01:17:13.991812 | orchestrator | Wednesday 08 April 2026 01:17:08 +0000 (0:00:00.444) 0:00:19.407 *******
2026-04-08 01:17:13.991819 | orchestrator | ok: [testbed-node-3]
2026-04-08 01:17:13.991826 | orchestrator | ok: [testbed-node-4]
2026-04-08 01:17:13.991833 | orchestrator | ok: [testbed-node-5]
2026-04-08 01:17:13.991839 | orchestrator |
2026-04-08 01:17:13.991847 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-08 01:17:13.991854 | orchestrator | Wednesday 08 April 2026 01:17:08 +0000 (0:00:00.316) 0:00:19.724 *******
2026-04-08 01:17:13.991861 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 01:17:13.991868 | orchestrator |
2026-04-08 01:17:13.991876 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-08 01:17:13.991883 | orchestrator | Wednesday 08 April 2026 01:17:09 +0000 (0:00:00.244) 0:00:19.969 *******
2026-04-08 01:17:13.991905 | orchestrator | skipping: [testbed-node-3]
2026-04-08 01:17:13.991910 | orchestrator |
2026-04-08 01:17:13.991917 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-08 01:17:13.991926 | orchestrator | Wednesday 08 April 2026 01:17:09 +0000 (0:00:00.243) 0:00:20.212 *******
2026-04-08 01:17:13.991936 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 01:17:13.991942 | orchestrator |
2026-04-08 01:17:13.991949 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-08 01:17:13.991955 | orchestrator | Wednesday 08 April 2026 01:17:11 +0000 (0:00:01.747) 0:00:21.960 *******
2026-04-08 01:17:13.991962 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 01:17:13.991969 | orchestrator |
2026-04-08 01:17:13.991975 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-08 01:17:13.991980 | orchestrator | Wednesday 08 April 2026 01:17:11 +0000 (0:00:00.258) 0:00:22.218 *******
2026-04-08 01:17:13.991984 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 01:17:13.991989 | orchestrator |
2026-04-08 01:17:13.991993 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:17:13.991998 | orchestrator | Wednesday 08 April 2026 01:17:11 +0000 (0:00:00.068) 0:00:22.487 *******
2026-04-08 01:17:13.992002 | orchestrator |
2026-04-08 01:17:13.992006 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:17:13.992010 | orchestrator | Wednesday 08 April 2026 01:17:11 +0000 (0:00:00.068) 0:00:22.555 *******
2026-04-08 01:17:13.992015 | orchestrator |
2026-04-08 01:17:13.992019 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-08 01:17:13.992023 | orchestrator | Wednesday 08 April 2026 01:17:11 +0000 (0:00:00.249) 0:00:22.804 *******
2026-04-08 01:17:13.992028 | orchestrator |
2026-04-08 01:17:13.992032 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-08 01:17:13.992036 | orchestrator | Wednesday 08 April 2026 01:17:12 +0000 (0:00:00.104) 0:00:22.909 *******
2026-04-08 01:17:13.992040 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 01:17:13.992045 | orchestrator |
2026-04-08 01:17:13.992051 | orchestrator | TASK [Print report file information] *******************************************
2026-04-08 01:17:13.992060 | orchestrator | Wednesday 08 April 2026 01:17:13 +0000 (0:00:01.255) 0:00:24.165 *******
2026-04-08 01:17:13.992069 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-08 01:17:13.992075 | orchestrator |  "msg": [
2026-04-08 01:17:13.992082 | orchestrator |  "Validator run completed.",
2026-04-08 01:17:13.992089 | orchestrator |  "You can find the report file here:",
2026-04-08 01:17:13.992096 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-08T01:16:50+00:00-report.json",
2026-04-08 01:17:13.992104 | orchestrator |  "on the following host:",
2026-04-08 01:17:13.992111 | orchestrator |  "testbed-manager"
2026-04-08 01:17:13.992118 | orchestrator |  ]
2026-04-08 01:17:13.992124 | orchestrator | }
2026-04-08 01:17:13.992131 | orchestrator |
2026-04-08 01:17:13.992138 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:17:13.992147 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-08 01:17:13.992157 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 01:17:13.992180 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 01:17:13.992188 | orchestrator |
2026-04-08 01:17:13.992195 | orchestrator |
2026-04-08 01:17:13.992202 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:17:13.992259 | orchestrator | Wednesday 08 April 2026 01:17:13 +0000 (0:00:00.388) 0:00:24.553 *******
2026-04-08 01:17:13.992266 | orchestrator | ===============================================================================
2026-04-08 01:17:13.992271 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.99s
2026-04-08 01:17:13.992276 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.78s
2026-04-08 01:17:13.992281 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s
2026-04-08 01:17:13.992286 | orchestrator | Write report file ------------------------------------------------------- 1.26s
2026-04-08 01:17:13.992291 | orchestrator | Get timestamp for report file ------------------------------------------- 1.06s
2026-04-08 01:17:13.992296 | orchestrator | Create report output directory ------------------------------------------ 0.66s
2026-04-08 01:17:13.992301 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.64s
2026-04-08 01:17:13.992307 | orchestrator | Print report file information ------------------------------------------- 0.62s
2026-04-08 01:17:13.992312 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.51s
2026-04-08 01:17:13.992317 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s
2026-04-08 01:17:13.992322 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s
2026-04-08 01:17:13.992327 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.47s
2026-04-08 01:17:13.992332 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.46s
2026-04-08 01:17:13.992337 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.46s
2026-04-08 01:17:13.992342 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.45s
2026-04-08 01:17:13.992348 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s
2026-04-08 01:17:13.992353 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s
2026-04-08 01:17:13.992368 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.44s
2026-04-08 01:17:13.992372 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s
2026-04-08 01:17:13.992377 | orchestrator | Print report file information ------------------------------------------- 0.39s
2026-04-08 01:17:14.200776 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-04-08 01:17:14.210720 | orchestrator | + set -e
2026-04-08 01:17:14.210875 | orchestrator | + source /opt/manager-vars.sh
2026-04-08 01:17:14.210886 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-08 01:17:14.210893 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-08 01:17:14.210898 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-08 01:17:14.210904 | orchestrator | ++ CEPH_VERSION=reef
2026-04-08 01:17:14.210911 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-08 01:17:14.210917 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-08 01:17:14.210924 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-08 01:17:14.210934 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-08 01:17:14.210946 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-08 01:17:14.210959 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-08 01:17:14.210968 | orchestrator | ++ export ARA=false
2026-04-08 01:17:14.210977 | orchestrator | ++ ARA=false
2026-04-08 01:17:14.210985 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-08 01:17:14.210994 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-08 01:17:14.211003 | orchestrator | ++ export TEMPEST=true
2026-04-08 01:17:14.211012 | orchestrator | ++ TEMPEST=true
2026-04-08 01:17:14.211021 | orchestrator | ++ export IS_ZUUL=true
2026-04-08 01:17:14.211030 | orchestrator | ++ IS_ZUUL=true
2026-04-08 01:17:14.211041 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187
2026-04-08 01:17:14.211047 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187
2026-04-08 01:17:14.211053 | orchestrator | ++ export EXTERNAL_API=false
2026-04-08 01:17:14.211058 | orchestrator | ++ EXTERNAL_API=false
2026-04-08 01:17:14.211064 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-08 01:17:14.211070 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-08 01:17:14.211075 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-08 01:17:14.211081 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-08 01:17:14.211087 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-08 01:17:14.211112 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-08 01:17:14.211118 | orchestrator | + source /etc/os-release
2026-04-08 01:17:14.211124 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-04-08 01:17:14.211130 | orchestrator | ++ NAME=Ubuntu
2026-04-08 01:17:14.211135 | orchestrator | ++ VERSION_ID=24.04
2026-04-08 01:17:14.211150 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-04-08 01:17:14.211155 | orchestrator | ++ VERSION_CODENAME=noble
2026-04-08 01:17:14.211161 | orchestrator | ++ ID=ubuntu
2026-04-08 01:17:14.211167 | orchestrator | ++ ID_LIKE=debian
2026-04-08 01:17:14.211173 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-04-08 01:17:14.211179 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-04-08 01:17:14.211184 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-04-08 01:17:14.211190 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-04-08 01:17:14.211197 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-04-08 01:17:14.211202 | orchestrator | ++ LOGO=ubuntu-logo
2026-04-08 01:17:14.211208 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-04-08 01:17:14.211225 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-04-08 01:17:14.211233 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-08 01:17:14.255576 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-08 01:17:38.661824 | orchestrator |
2026-04-08 01:17:38.661903 | orchestrator | # Status of Elasticsearch
2026-04-08 01:17:38.661910 | orchestrator |
2026-04-08 01:17:38.661915 | orchestrator | + pushd /opt/configuration/contrib
2026-04-08 01:17:38.661920 | orchestrator | + echo
2026-04-08 01:17:38.661925 | orchestrator | + echo '# Status of Elasticsearch'
2026-04-08 01:17:38.661929 | orchestrator | + echo
2026-04-08 01:17:38.661934 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-04-08 01:17:38.849078 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-04-08 01:17:38.849162 | orchestrator |
2026-04-08 01:17:38.849173 | orchestrator | # Status of MariaDB
2026-04-08 01:17:38.849180 | orchestrator |
2026-04-08 01:17:38.849187 | orchestrator | + echo
2026-04-08 01:17:38.849194 | orchestrator | + echo '# Status of MariaDB'
2026-04-08 01:17:38.849200 | orchestrator | + echo
2026-04-08 01:17:38.849811 | orchestrator | ++ semver latest 10.0.0-0
2026-04-08 01:17:38.899172 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-08 01:17:38.899244 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-08 01:17:38.899250 | orchestrator | + osism status database
2026-04-08 01:17:40.598569 | orchestrator | 2026-04-08 01:17:40 | ERROR  | Unable to get ansible vault password
2026-04-08 01:17:40.598647 | orchestrator | 2026-04-08 01:17:40 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:17:40.598656 | orchestrator | 2026-04-08 01:17:40 | ERROR  | Dropping encrypted entries
2026-04-08 01:17:40.634406 | orchestrator | 2026-04-08 01:17:40 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0...
2026-04-08 01:17:40.645460 | orchestrator | 2026-04-08 01:17:40 | INFO  | Cluster Status: Primary
2026-04-08 01:17:40.645582 | orchestrator | 2026-04-08 01:17:40 | INFO  | Connected: ON
2026-04-08 01:17:40.645592 | orchestrator | 2026-04-08 01:17:40 | INFO  | Ready: ON
2026-04-08 01:17:40.645598 | orchestrator | 2026-04-08 01:17:40 | INFO  | Cluster Size: 3
2026-04-08 01:17:40.645604 | orchestrator | 2026-04-08 01:17:40 | INFO  | Local State: Synced
2026-04-08 01:17:40.645611 | orchestrator | 2026-04-08 01:17:40 | INFO  | Cluster State UUID: 6ea8c907-32e5-11f1-9113-178db2fd5a53
2026-04-08 01:17:40.645667 | orchestrator | 2026-04-08 01:17:40 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306
2026-04-08 01:17:40.645709 | orchestrator | 2026-04-08 01:17:40 | INFO  | Galera Version: 26.4.25(r7387a566)
2026-04-08 01:17:40.645745 | orchestrator | 2026-04-08 01:17:40 | INFO  | Local Node UUID: a2b2621c-32e5-11f1-9f58-cecef50a9606
2026-04-08 01:17:40.645762 | orchestrator | 2026-04-08 01:17:40 | INFO  | Flow Control Paused: 0.00%
2026-04-08 01:17:40.645770 | orchestrator | 2026-04-08 01:17:40 | INFO  | Recv Queue Avg: 0
2026-04-08 01:17:40.645784 | orchestrator | 2026-04-08 01:17:40 | INFO  | Send Queue Avg: 0.00059988
2026-04-08 01:17:40.645837 | orchestrator | 2026-04-08 01:17:40 | INFO  | Transactions: 4425 local commits, 6610 replicated, 88 received
2026-04-08 01:17:40.645846 | orchestrator | 2026-04-08 01:17:40 | INFO  | Conflicts: 0 cert failures, 0 bf aborts
2026-04-08 01:17:40.645853 | orchestrator | 2026-04-08 01:17:40 | INFO  | MariaDB Uptime: 22 minutes, 12 seconds
2026-04-08 01:17:40.645903 | orchestrator | 2026-04-08 01:17:40 | INFO  | Threads: 133 connected, 1 running
2026-04-08 01:17:40.645910 | orchestrator | 2026-04-08 01:17:40 | INFO  | Queries: 214184 total, 0 slow
2026-04-08 01:17:40.645955 | orchestrator | 2026-04-08 01:17:40 | INFO  | Aborted Connects: 144
2026-04-08 01:17:40.646303 | orchestrator | 2026-04-08 01:17:40 | INFO  | MariaDB Galera Cluster validation PASSED
2026-04-08 01:17:40.876020 | orchestrator |
2026-04-08 01:17:40.876105 | orchestrator | # Status of Prometheus
2026-04-08 01:17:40.876117 | orchestrator |
2026-04-08 01:17:40.876122 | orchestrator | + echo
2026-04-08 01:17:40.876126 | orchestrator | + echo '# Status of Prometheus'
2026-04-08 01:17:40.876131 | orchestrator | + echo
2026-04-08 01:17:40.876136 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-08 01:17:40.934693 | orchestrator | Unauthorized
2026-04-08 01:17:40.937784 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-08 01:17:41.008851 | orchestrator | Unauthorized
2026-04-08 01:17:41.013087 | orchestrator |
2026-04-08 01:17:41.013162 | orchestrator | # Status of RabbitMQ
2026-04-08 01:17:41.013174 | orchestrator |
2026-04-08 01:17:41.013179 | orchestrator | + echo
2026-04-08 01:17:41.013184 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-08 01:17:41.013188 | orchestrator | + echo
2026-04-08 01:17:41.013822 | orchestrator | ++ semver latest 10.0.0-0
2026-04-08 01:17:41.084396 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-08 01:17:41.084573 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-08 01:17:41.084585 | orchestrator | + osism status messaging
2026-04-08 01:17:48.217700 | orchestrator | 2026-04-08 01:17:48 | ERROR  | Unable to get ansible vault password
2026-04-08 01:17:48.217771 | orchestrator | 2026-04-08 01:17:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:17:48.217780 | orchestrator | 2026-04-08 01:17:48 | ERROR  | Dropping encrypted entries
2026-04-08 01:17:48.258539 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack...
2026-04-08 01:17:48.334471 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7
2026-04-08 01:17:48.334553 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15
2026-04-08 01:17:48.334562 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-04-08 01:17:48.334569 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Cluster Size: 3
2026-04-08 01:17:48.334583 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-08 01:17:48.334592 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-08 01:17:48.334838 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-04-08 01:17:48.334855 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Connections: 203, Channels: 202, Queues: 173
2026-04-08 01:17:48.335269 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Messages: 231 total, 231 ready, 0 unacked
2026-04-08 01:17:48.335732 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Message Rates: 7.4/s publish, 8.8/s deliver
2026-04-08 01:17:48.335962 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Disk Free: 58.0 GB (limit: 0.0 GB)
2026-04-08 01:17:48.336392 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-04-08 01:17:48.336459 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-0] File Descriptors: 120/1024
2026-04-08 01:17:48.337065 | orchestrator | 2026-04-08 01:17:48 | INFO
 | [testbed-node-0] Sockets: 74/832 2026-04-08 01:17:48.337083 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-04-08 01:17:48.398774 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-04-08 01:17:48.398865 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-04-08 01:17:48.398877 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-08 01:17:48.398884 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-08 01:17:48.398902 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-08 01:17:48.399141 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-08 01:17:48.399220 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-08 01:17:48.399229 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Connections: 203, Channels: 202, Queues: 173 2026-04-08 01:17:48.399240 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Messages: 231 total, 231 ready, 0 unacked 2026-04-08 01:17:48.399391 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Message Rates: 7.4/s publish, 8.8/s deliver 2026-04-08 01:17:48.400018 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Disk Free: 58.4 GB (limit: 0.0 GB) 2026-04-08 01:17:48.400075 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-08 01:17:48.400084 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] File Descriptors: 108/1024 2026-04-08 01:17:48.400371 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-1] 
Sockets: 61/832 2026-04-08 01:17:48.400542 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-04-08 01:17:48.460698 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-04-08 01:17:48.460788 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-04-08 01:17:48.460798 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-08 01:17:48.460805 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-08 01:17:48.460813 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-08 01:17:48.460838 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-08 01:17:48.460864 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-08 01:17:48.460869 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Connections: 203, Channels: 202, Queues: 173 2026-04-08 01:17:48.460876 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Messages: 231 total, 231 ready, 0 unacked 2026-04-08 01:17:48.460882 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Message Rates: 7.4/s publish, 8.8/s deliver 2026-04-08 01:17:48.460888 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-08 01:17:48.460894 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-08 01:17:48.460900 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] File Descriptors: 116/1024 2026-04-08 01:17:48.460906 | orchestrator | 2026-04-08 01:17:48 | INFO  | [testbed-node-2] Sockets: 68/832 
2026-04-08 01:17:48.460913 | orchestrator | 2026-04-08 01:17:48 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-08 01:17:48.724209 | orchestrator | 2026-04-08 01:17:48.724300 | orchestrator | # Status of Redis 2026-04-08 01:17:48.724310 | orchestrator | 2026-04-08 01:17:48.724317 | orchestrator | + echo 2026-04-08 01:17:48.724325 | orchestrator | + echo '# Status of Redis' 2026-04-08 01:17:48.724333 | orchestrator | + echo 2026-04-08 01:17:48.724341 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-08 01:17:48.728923 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001739s;;;0.000000;10.000000 2026-04-08 01:17:48.729005 | orchestrator | 2026-04-08 01:17:48.729017 | orchestrator | # Create backup of MariaDB database 2026-04-08 01:17:48.729026 | orchestrator | 2026-04-08 01:17:48.729033 | orchestrator | + popd 2026-04-08 01:17:48.729040 | orchestrator | + echo 2026-04-08 01:17:48.729047 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-08 01:17:48.729053 | orchestrator | + echo 2026-04-08 01:17:48.729060 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-08 01:17:50.066546 | orchestrator | 2026-04-08 01:17:50 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-08 01:17:50.133017 | orchestrator | 2026-04-08 01:17:50 | INFO  | Task 4324ee2e-b62a-4743-9a8f-9d46371eebae (mariadb_backup) was prepared for execution. 2026-04-08 01:17:50.133094 | orchestrator | 2026-04-08 01:17:50 | INFO  | It takes a moment until task 4324ee2e-b62a-4743-9a8f-9d46371eebae (mariadb_backup) has been started and output is visible here. 
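Annotation: before running `osism status messaging`, the check script gates on the manager version (the `++ semver latest 10.0.0-0`, `+ [[ -1 -ge 0 ]]`, `+ [[ latest == latest ]]` xtrace lines earlier in this section). A minimal sketch of that gate, reconstructed from the trace rather than copied from the actual testbed script; the legacy branch is a hypothetical placeholder since it is never taken in this run:

```shell
#!/bin/sh
# Sketch (assumption: reconstructed from the "+ ..." xtrace lines above,
# not the literal testbed script).
MANAGER_VERSION=latest

# semver prints -1/0/1 for less/equal/greater; the tag "latest" compares
# as -1 against any concrete version, hence the explicit string match.
result=-1   # stand-in for: result=$(semver "$MANAGER_VERSION" 10.0.0-0)

if [ "$result" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
    choice="osism status messaging"   # new subcommand, used in this run
else
    choice="legacy-rabbitmq-check"    # hypothetical placeholder
fi
echo "$choice"
```

With `MANAGER_VERSION=latest` the first branch is taken, matching the `osism status messaging` call seen in the trace.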
2026-04-08 01:19:12.600408 | orchestrator | 2026-04-08 01:19:12.600539 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 01:19:12.600552 | orchestrator | 2026-04-08 01:19:12.600558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 01:19:12.600566 | orchestrator | Wednesday 08 April 2026 01:17:53 +0000 (0:00:00.259) 0:00:00.259 ******* 2026-04-08 01:19:12.600571 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:19:12.600579 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:19:12.600585 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:19:12.600592 | orchestrator | 2026-04-08 01:19:12.600598 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 01:19:12.600605 | orchestrator | Wednesday 08 April 2026 01:17:53 +0000 (0:00:00.303) 0:00:00.563 ******* 2026-04-08 01:19:12.600612 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-08 01:19:12.600619 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-08 01:19:12.600626 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-08 01:19:12.600632 | orchestrator | 2026-04-08 01:19:12.600639 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-08 01:19:12.600669 | orchestrator | 2026-04-08 01:19:12.600676 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-08 01:19:12.600682 | orchestrator | Wednesday 08 April 2026 01:17:53 +0000 (0:00:00.392) 0:00:00.955 ******* 2026-04-08 01:19:12.600686 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-08 01:19:12.600707 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-08 01:19:12.600713 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-08 01:19:12.600725 | orchestrator | 
2026-04-08 01:19:12.600731 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-08 01:19:12.600738 | orchestrator | Wednesday 08 April 2026 01:17:54 +0000 (0:00:00.414) 0:00:01.369 ******* 2026-04-08 01:19:12.600745 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 01:19:12.600753 | orchestrator | 2026-04-08 01:19:12.600760 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-08 01:19:12.600764 | orchestrator | Wednesday 08 April 2026 01:17:54 +0000 (0:00:00.651) 0:00:02.021 ******* 2026-04-08 01:19:12.600768 | orchestrator | ok: [testbed-node-0] 2026-04-08 01:19:12.600772 | orchestrator | ok: [testbed-node-1] 2026-04-08 01:19:12.600776 | orchestrator | ok: [testbed-node-2] 2026-04-08 01:19:12.600780 | orchestrator | 2026-04-08 01:19:12.600785 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-08 01:19:12.600789 | orchestrator | Wednesday 08 April 2026 01:17:58 +0000 (0:00:03.182) 0:00:05.204 ******* 2026-04-08 01:19:12.600793 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:19:12.600798 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:19:12.600803 | orchestrator | changed: [testbed-node-0] 2026-04-08 01:19:12.600807 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-08 01:19:12.600810 | orchestrator | 2026-04-08 01:19:12.600814 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-08 01:19:12.600819 | orchestrator | skipping: no hosts matched 2026-04-08 01:19:12.600823 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-08 01:19:12.600827 | orchestrator | 2026-04-08 01:19:12.600831 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-08 01:19:12.600835 | orchestrator | skipping: no hosts matched 2026-04-08 01:19:12.600839 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-08 01:19:12.600843 | orchestrator | mariadb_bootstrap_restart 2026-04-08 01:19:12.600847 | orchestrator | 2026-04-08 01:19:12.600851 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-08 01:19:12.600855 | orchestrator | skipping: no hosts matched 2026-04-08 01:19:12.600858 | orchestrator | 2026-04-08 01:19:12.600862 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-08 01:19:12.600866 | orchestrator | 2026-04-08 01:19:12.600870 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-08 01:19:12.600874 | orchestrator | Wednesday 08 April 2026 01:19:11 +0000 (0:01:13.681) 0:01:18.886 ******* 2026-04-08 01:19:12.600891 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:19:12.600895 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:19:12.600899 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:19:12.600903 | orchestrator | 2026-04-08 01:19:12.600907 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-08 01:19:12.600913 | orchestrator | Wednesday 08 April 2026 01:19:12 +0000 (0:00:00.285) 0:01:19.172 ******* 2026-04-08 01:19:12.600919 | orchestrator | skipping: [testbed-node-0] 2026-04-08 01:19:12.600925 | orchestrator | skipping: [testbed-node-1] 2026-04-08 01:19:12.600932 | orchestrator | skipping: [testbed-node-2] 2026-04-08 01:19:12.600939 | orchestrator | 2026-04-08 01:19:12.600944 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:19:12.600949 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-08 01:19:12.600960 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-08 01:19:12.600965 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-08 01:19:12.600970 | orchestrator | 2026-04-08 01:19:12.600974 | orchestrator | 2026-04-08 01:19:12.600979 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:19:12.600984 | orchestrator | Wednesday 08 April 2026 01:19:12 +0000 (0:00:00.209) 0:01:19.381 ******* 2026-04-08 01:19:12.600988 | orchestrator | =============================================================================== 2026-04-08 01:19:12.600993 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 73.68s 2026-04-08 01:19:12.601013 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.18s 2026-04-08 01:19:12.601018 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.65s 2026-04-08 01:19:12.601022 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2026-04-08 01:19:12.601027 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-04-08 01:19:12.601032 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-04-08 01:19:12.601036 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2026-04-08 01:19:12.601041 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s 2026-04-08 01:19:12.785157 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-08 01:19:12.795860 | orchestrator | + set -e 2026-04-08 01:19:12.795940 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-08 01:19:12.795958 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-08 01:19:12.795972 | orchestrator | ++ INTERACTIVE=false 2026-04-08 01:19:12.795981 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-08 01:19:12.795990 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-08 01:19:12.796000 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-08 01:19:12.796923 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-08 01:19:12.802559 | orchestrator | 2026-04-08 01:19:12.802664 | orchestrator | # OpenStack endpoints 2026-04-08 01:19:12.802680 | orchestrator | 2026-04-08 01:19:12.802690 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-08 01:19:12.802702 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-08 01:19:12.802713 | orchestrator | + export OS_CLOUD=admin 2026-04-08 01:19:12.802720 | orchestrator | + OS_CLOUD=admin 2026-04-08 01:19:12.802727 | orchestrator | + echo 2026-04-08 01:19:12.802733 | orchestrator | + echo '# OpenStack endpoints' 2026-04-08 01:19:12.802740 | orchestrator | + echo 2026-04-08 01:19:12.802747 | orchestrator | + openstack endpoint list 2026-04-08 01:19:16.226622 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-08 01:19:16.226702 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-08 01:19:16.226711 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-08 01:19:16.226718 | orchestrator | | 0385f1a74729414eb936e53d20e9a79f | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-08 01:19:16.226725 | orchestrator | | 1b7b14cfc49d4b1a893ae2d85a94b597 | RegionOne | 
magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-08 01:19:16.226746 | orchestrator | | 28d9ce10c62d452784217c18293638d9 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-08 01:19:16.226754 | orchestrator | | 34ab8a1665934648a97d5de50e55f177 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-08 01:19:16.226793 | orchestrator | | 3761b6f6a12040eb88b834d08e81b9c9 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-08 01:19:16.226802 | orchestrator | | 39c77957bf6746d883a7535ef8c2759d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-08 01:19:16.226807 | orchestrator | | 437fdb757baf4a88a3031259b577bd31 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-08 01:19:16.226811 | orchestrator | | 46eafff908a7410baec5ad98d9aa7189 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-08 01:19:16.226815 | orchestrator | | 62646b8ac76841b4ab2b25378c107382 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-08 01:19:16.226819 | orchestrator | | 65711703e5d7455da769a00887fa5ba8 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-08 01:19:16.226822 | orchestrator | | 927f34d24819484f9a4041619fa25445 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-08 01:19:16.226826 | orchestrator | | 9b04fd7ce8c242bba594cacc84d95753 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-08 01:19:16.226831 | orchestrator | | 9d69c21fbdc6460084ff590c2b44de00 | RegionOne | designate | dns | 
True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-08 01:19:16.226837 | orchestrator | | a92352b5bc0e4d9b9609de696b634cbf | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-08 01:19:16.226842 | orchestrator | | c220bc38772643448026d98f01696d13 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-08 01:19:16.226848 | orchestrator | | c60a819503fd422993338fca8eb9217f | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-08 01:19:16.226854 | orchestrator | | c9c42a5bccc44cfaa19996570ddc0111 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-08 01:19:16.226859 | orchestrator | | cfe647503204483b8925e584bac6d827 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-08 01:19:16.226864 | orchestrator | | cffbef76c88f47f0be75cf5159ee0532 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-08 01:19:16.226871 | orchestrator | | e5532ad560064b958a42b8fc0b8ce5e4 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-08 01:19:16.226896 | orchestrator | | e80da3f519d14ba98a2e5f1171f79010 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-08 01:19:16.226906 | orchestrator | | f392cf266d5a483fb3d48025aecad773 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-08 01:19:16.226919 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-08 01:19:16.465477 | orchestrator | 2026-04-08 01:19:16.465550 | orchestrator | # Cinder 2026-04-08 01:19:16.465576 | orchestrator | 2026-04-08 01:19:16.465581 | 
orchestrator | + echo 2026-04-08 01:19:16.465586 | orchestrator | + echo '# Cinder' 2026-04-08 01:19:16.465590 | orchestrator | + echo 2026-04-08 01:19:16.465594 | orchestrator | + openstack volume service list 2026-04-08 01:19:19.137651 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-08 01:19:19.137751 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-08 01:19:19.137758 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-08 01:19:19.137763 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-08T01:19:17.000000 | 2026-04-08 01:19:19.137781 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-08T01:19:17.000000 | 2026-04-08 01:19:19.137785 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-08T01:19:18.000000 | 2026-04-08 01:19:19.137790 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-08T01:19:17.000000 | 2026-04-08 01:19:19.137794 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-08T01:19:16.000000 | 2026-04-08 01:19:19.137798 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-08T01:19:16.000000 | 2026-04-08 01:19:19.137802 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-08T01:19:12.000000 | 2026-04-08 01:19:19.137806 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-08T01:19:14.000000 | 2026-04-08 01:19:19.137810 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-08T01:19:15.000000 | 2026-04-08 01:19:19.137813 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
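Annotation: the `openstack volume service list` table above is printed for inspection only; the check script does not fail on a down service. A minimal non-interactive sketch of such a gate, using a captured sample (mirroring the table above) in place of the live `openstack volume service list -f value -c Binary -c Host -c State` call, which is an assumption about how one would script this, not part of the testbed scripts:

```shell
#!/bin/sh
# Sketch: fail if any Cinder service reports a state other than "up".
# The sample stands in for:
#   openstack volume service list -f value -c Binary -c Host -c State
sample='cinder-scheduler testbed-node-0 up
cinder-scheduler testbed-node-1 up
cinder-volume testbed-node-0@rbd-volumes up
cinder-backup testbed-node-2 up'

# Collect binary/host pairs whose third column is not "up".
down=$(printf '%s\n' "$sample" | awk '$3 != "up" { print $1 "/" $2 }')
if [ -n "$down" ]; then
    echo "FAILED: services down: $down"
    exit 1
fi
echo "OK: all Cinder services up"
```

The same `-f value` pattern would apply to the `openstack compute service list` and `openstack hypervisor list` tables later in this check.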
2026-04-08 01:19:19.388521 | orchestrator | 2026-04-08 01:19:19.388607 | orchestrator | # Neutron 2026-04-08 01:19:19.388618 | orchestrator | 2026-04-08 01:19:19.388626 | orchestrator | + echo 2026-04-08 01:19:19.388634 | orchestrator | + echo '# Neutron' 2026-04-08 01:19:19.388643 | orchestrator | + echo 2026-04-08 01:19:19.388650 | orchestrator | + openstack network agent list 2026-04-08 01:19:22.174206 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-08 01:19:22.174300 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-08 01:19:22.174311 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-08 01:19:22.174317 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-08 01:19:22.174321 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-08 01:19:22.174326 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-08 01:19:22.174330 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-08 01:19:22.174334 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-08 01:19:22.174338 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-08 01:19:22.174342 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-08 01:19:22.174368 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent 
| testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-08 01:19:22.174372 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-08 01:19:22.174376 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-08 01:19:22.430935 | orchestrator | + openstack network service provider list 2026-04-08 01:19:24.955299 | orchestrator | +---------------+------+---------+ 2026-04-08 01:19:24.955445 | orchestrator | | Service Type | Name | Default | 2026-04-08 01:19:24.955458 | orchestrator | +---------------+------+---------+ 2026-04-08 01:19:24.955466 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-08 01:19:24.955471 | orchestrator | +---------------+------+---------+ 2026-04-08 01:19:25.227350 | orchestrator | + echo 2026-04-08 01:19:25.227626 | orchestrator | 2026-04-08 01:19:25.227653 | orchestrator | # Nova 2026-04-08 01:19:25.227663 | orchestrator | 2026-04-08 01:19:25.227672 | orchestrator | + echo '# Nova' 2026-04-08 01:19:25.227680 | orchestrator | + echo 2026-04-08 01:19:25.227688 | orchestrator | + openstack compute service list 2026-04-08 01:19:28.614487 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-08 01:19:28.614561 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-08 01:19:28.614568 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-08 01:19:28.614573 | orchestrator | | 9dfbd031-f9e0-47be-803c-7c5fd56443f4 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-08T01:19:26.000000 | 2026-04-08 01:19:28.614577 | orchestrator | | aace37ca-9d6c-4f1c-815a-dcdc0980683b | 
nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-08T01:19:26.000000 | 2026-04-08 01:19:28.614581 | orchestrator | | 42a15387-f7d0-45be-8c97-b62ea29a6119 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-08T01:19:27.000000 | 2026-04-08 01:19:28.614598 | orchestrator | | d45fe4c9-8da8-4e47-bc7d-363d9f453544 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-08T01:19:24.000000 | 2026-04-08 01:19:28.614602 | orchestrator | | 204a45bf-02df-4f36-b342-1bb14b93277b | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-08T01:19:26.000000 | 2026-04-08 01:19:28.614606 | orchestrator | | d87ae513-db86-4cad-8cc1-50ce620e59ce | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-08T01:19:18.000000 | 2026-04-08 01:19:28.614610 | orchestrator | | dab2a3b7-883b-4738-b61d-8d20dd4d1302 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-08T01:19:24.000000 | 2026-04-08 01:19:28.614614 | orchestrator | | 84a3e7ea-05c9-4b9f-8e0b-a4818a5be7be | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-08T01:19:24.000000 | 2026-04-08 01:19:28.614618 | orchestrator | | 7e5cfacd-d220-4a75-93c9-b9a71b532e03 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-08T01:19:25.000000 | 2026-04-08 01:19:28.614622 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-08 01:19:28.883957 | orchestrator | + openstack hypervisor list 2026-04-08 01:19:31.583767 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-08 01:19:31.583889 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-08 01:19:31.583907 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-08 01:19:31.583921 | orchestrator | | 
9cb92cd4-a0d6-43a1-baae-8afe90105ead | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-08 01:19:31.583934 | orchestrator | | 406a04cd-86d1-48f3-bd2d-ad6d3f4ced60 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-08 01:19:31.583946 | orchestrator | | 7f3285b6-8079-41ed-9671-782027db2198 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-08 01:19:31.583990 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-08 01:19:31.831150 | orchestrator | 2026-04-08 01:19:31.831240 | orchestrator | # Run OpenStack test play 2026-04-08 01:19:31.831254 | orchestrator | 2026-04-08 01:19:31.831263 | orchestrator | + echo 2026-04-08 01:19:31.831273 | orchestrator | + echo '# Run OpenStack test play' 2026-04-08 01:19:31.831281 | orchestrator | + echo 2026-04-08 01:19:31.831290 | orchestrator | + osism apply --environment openstack test 2026-04-08 01:19:33.107091 | orchestrator | 2026-04-08 01:19:33 | INFO  | Trying to run play test in environment openstack 2026-04-08 01:19:33.137176 | orchestrator | 2026-04-08 01:19:33 | INFO  | Prepare task for execution of test. 2026-04-08 01:19:33.208237 | orchestrator | 2026-04-08 01:19:33 | INFO  | Task 21f7e39f-4626-45b0-951f-85e51e6b7593 (test) was prepared for execution. 2026-04-08 01:19:33.208321 | orchestrator | 2026-04-08 01:19:33 | INFO  | It takes a moment until task 21f7e39f-4626-45b0-951f-85e51e6b7593 (test) has been started and output is visible here. 
2026-04-08 01:22:50.781563 | orchestrator | 2026-04-08 01:22:50.781712 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-08 01:22:50.781720 | orchestrator | 2026-04-08 01:22:50.781725 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-08 01:22:50.781730 | orchestrator | Wednesday 08 April 2026 01:19:36 +0000 (0:00:00.104) 0:00:00.104 ******* 2026-04-08 01:22:50.781734 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781740 | orchestrator | 2026-04-08 01:22:50.781744 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-08 01:22:50.781748 | orchestrator | Wednesday 08 April 2026 01:19:40 +0000 (0:00:03.803) 0:00:03.908 ******* 2026-04-08 01:22:50.781752 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781756 | orchestrator | 2026-04-08 01:22:50.781760 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-08 01:22:50.781764 | orchestrator | Wednesday 08 April 2026 01:19:44 +0000 (0:00:04.290) 0:00:08.198 ******* 2026-04-08 01:22:50.781768 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781772 | orchestrator | 2026-04-08 01:22:50.781775 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-08 01:22:50.781780 | orchestrator | Wednesday 08 April 2026 01:19:50 +0000 (0:00:06.556) 0:00:14.755 ******* 2026-04-08 01:22:50.781783 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781787 | orchestrator | 2026-04-08 01:22:50.781791 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-08 01:22:50.781795 | orchestrator | Wednesday 08 April 2026 01:19:55 +0000 (0:00:04.399) 0:00:19.154 ******* 2026-04-08 01:22:50.781799 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781803 | orchestrator | 2026-04-08 01:22:50.781807 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-08 01:22:50.781811 | orchestrator | Wednesday 08 April 2026 01:19:59 +0000 (0:00:04.440) 0:00:23.594 ******* 2026-04-08 01:22:50.781815 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-08 01:22:50.781820 | orchestrator | changed: [localhost] => (item=member) 2026-04-08 01:22:50.781824 | orchestrator | changed: [localhost] => (item=creator) 2026-04-08 01:22:50.781828 | orchestrator | 2026-04-08 01:22:50.781832 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-08 01:22:50.781836 | orchestrator | Wednesday 08 April 2026 01:20:11 +0000 (0:00:12.069) 0:00:35.664 ******* 2026-04-08 01:22:50.781840 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781844 | orchestrator | 2026-04-08 01:22:50.781848 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-08 01:22:50.781852 | orchestrator | Wednesday 08 April 2026 01:20:16 +0000 (0:00:04.553) 0:00:40.217 ******* 2026-04-08 01:22:50.781856 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781860 | orchestrator | 2026-04-08 01:22:50.781864 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-08 01:22:50.781884 | orchestrator | Wednesday 08 April 2026 01:20:21 +0000 (0:00:04.991) 0:00:45.208 ******* 2026-04-08 01:22:50.781888 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781892 | orchestrator | 2026-04-08 01:22:50.781896 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-08 01:22:50.781900 | orchestrator | Wednesday 08 April 2026 01:20:26 +0000 (0:00:05.045) 0:00:50.254 ******* 2026-04-08 01:22:50.781904 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781908 | orchestrator | 2026-04-08 01:22:50.781911 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-04-08 01:22:50.781915 | orchestrator | Wednesday 08 April 2026 01:20:30 +0000 (0:00:03.996) 0:00:54.250 ******* 2026-04-08 01:22:50.781919 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781923 | orchestrator | 2026-04-08 01:22:50.781927 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-08 01:22:50.781931 | orchestrator | Wednesday 08 April 2026 01:20:34 +0000 (0:00:04.074) 0:00:58.325 ******* 2026-04-08 01:22:50.781934 | orchestrator | changed: [localhost] 2026-04-08 01:22:50.781938 | orchestrator | 2026-04-08 01:22:50.781942 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-08 01:22:50.781946 | orchestrator | Wednesday 08 April 2026 01:20:38 +0000 (0:00:04.038) 0:01:02.364 ******* 2026-04-08 01:22:50.781950 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-08 01:22:50.781954 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-08 01:22:50.781958 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-08 01:22:50.781961 | orchestrator | 2026-04-08 01:22:50.781965 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-08 01:22:50.781969 | orchestrator | Wednesday 08 April 2026 01:20:52 +0000 (0:00:13.792) 0:01:16.157 ******* 2026-04-08 01:22:50.781973 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-08 01:22:50.781977 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-08 01:22:50.781981 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-08 01:22:50.781985 | orchestrator | 2026-04-08 01:22:50.781989 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-08 01:22:50.781993 | orchestrator | Wednesday 08 April 2026 01:21:09 +0000 (0:00:16.839) 0:01:32.996 ******* 2026-04-08 01:22:50.781997 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-08 01:22:50.782001 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-08 01:22:50.782005 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-08 01:22:50.782008 | orchestrator | 2026-04-08 01:22:50.782047 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-08 01:22:50.782051 | orchestrator | 2026-04-08 01:22:50.782055 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-08 01:22:50.782071 | orchestrator | Wednesday 08 April 2026 01:21:43 +0000 (0:00:33.925) 0:02:06.921 ******* 2026-04-08 01:22:50.782075 | orchestrator | ok: [localhost] 2026-04-08 01:22:50.782079 | orchestrator | 2026-04-08 01:22:50.782083 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-08 01:22:50.782087 | orchestrator | Wednesday 08 April 2026 01:21:46 +0000 (0:00:03.581) 0:02:10.503 ******* 2026-04-08 01:22:50.782103 | orchestrator | skipping: [localhost] 2026-04-08 01:22:50.782107 | orchestrator | 2026-04-08 01:22:50.782111 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-08 01:22:50.782115 | orchestrator | Wednesday 08 April 2026 01:21:46 +0000 (0:00:00.054) 0:02:10.558 ******* 2026-04-08 01:22:50.782119 | orchestrator | skipping: [localhost] 2026-04-08 01:22:50.782123 | orchestrator | 2026-04-08 01:22:50.782128 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-08 01:22:50.782138 | orchestrator | 
Wednesday 08 April 2026 01:21:46 +0000 (0:00:00.058) 0:02:10.616 ******* 2026-04-08 01:22:50.782143 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-08 01:22:50.782148 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-08 01:22:50.782153 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-08 01:22:50.782158 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-08 01:22:50.782162 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-08 01:22:50.782167 | orchestrator | skipping: [localhost] 2026-04-08 01:22:50.782172 | orchestrator | 2026-04-08 01:22:50.782177 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-08 01:22:50.782181 | orchestrator | Wednesday 08 April 2026 01:21:47 +0000 (0:00:00.158) 0:02:10.775 ******* 2026-04-08 01:22:50.782186 | orchestrator | skipping: [localhost] 2026-04-08 01:22:50.782190 | orchestrator | 2026-04-08 01:22:50.782195 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-08 01:22:50.782199 | orchestrator | Wednesday 08 April 2026 01:21:47 +0000 (0:00:00.154) 0:02:10.930 ******* 2026-04-08 01:22:50.782204 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-08 01:22:50.782208 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-08 01:22:50.782213 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-08 01:22:50.782217 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-08 01:22:50.782225 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-08 01:22:50.782230 | orchestrator | 2026-04-08 
01:22:50.782234 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-08 01:22:50.782239 | orchestrator | Wednesday 08 April 2026 01:21:51 +0000 (0:00:04.700) 0:02:15.630 ******* 2026-04-08 01:22:50.782243 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-08 01:22:50.782249 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-08 01:22:50.782254 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-08 01:22:50.782259 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-04-08 01:22:50.782263 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 2026-04-08 01:22:50.782269 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j903398659190.2789', 'results_file': '/ansible/.ansible_async/j903398659190.2789', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:22:50.782276 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j451958357586.2814', 'results_file': '/ansible/.ansible_async/j451958357586.2814', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:22:50.782280 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j922801608298.2839', 'results_file': '/ansible/.ansible_async/j922801608298.2839', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:22:50.782285 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j764146003192.2864', 
'results_file': '/ansible/.ansible_async/j764146003192.2864', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:22:50.782290 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j209402585683.2889', 'results_file': '/ansible/.ansible_async/j209402585683.2889', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:22:50.782298 | orchestrator | 2026-04-08 01:22:50.782303 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-08 01:22:50.782308 | orchestrator | Wednesday 08 April 2026 01:22:49 +0000 (0:00:57.829) 0:03:13.460 ******* 2026-04-08 01:22:50.782315 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-08 01:24:04.126359 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-08 01:24:04.126443 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-08 01:24:04.126453 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-08 01:24:04.126460 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-08 01:24:04.126467 | orchestrator | 2026-04-08 01:24:04.126475 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-08 01:24:04.126481 | orchestrator | Wednesday 08 April 2026 01:22:54 +0000 (0:00:04.861) 0:03:18.322 ******* 2026-04-08 01:24:04.126487 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
2026-04-08 01:24:04.126497 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j268801795830.3000', 'results_file': '/ansible/.ansible_async/j268801795830.3000', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126507 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j35910528023.3025', 'results_file': '/ansible/.ansible_async/j35910528023.3025', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126515 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j600953861787.3050', 'results_file': '/ansible/.ansible_async/j600953861787.3050', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126520 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j849673294461.3075', 'results_file': '/ansible/.ansible_async/j849673294461.3075', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126537 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j361432712133.3100', 'results_file': '/ansible/.ansible_async/j361432712133.3100', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126541 | orchestrator | 2026-04-08 01:24:04.126548 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-08 01:24:04.126554 | orchestrator | Wednesday 08 April 2026 01:23:04 +0000 (0:00:09.817) 0:03:28.139 ******* 2026-04-08 01:24:04.126559 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-08 01:24:04.126569 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-08 01:24:04.126577 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-08 01:24:04.126583 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-08 01:24:04.126589 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-08 01:24:04.126595 | orchestrator | 2026-04-08 01:24:04.126602 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-08 01:24:04.126608 | orchestrator | Wednesday 08 April 2026 01:23:08 +0000 (0:00:04.458) 0:03:32.597 ******* 2026-04-08 01:24:04.126688 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-04-08 01:24:04.126721 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j573191549435.3169', 'results_file': '/ansible/.ansible_async/j573191549435.3169', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126728 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j95901471850.3194', 'results_file': '/ansible/.ansible_async/j95901471850.3194', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126735 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j61347191663.3220', 'results_file': '/ansible/.ansible_async/j61347191663.3220', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126741 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j788236688533.3246', 'results_file': '/ansible/.ansible_async/j788236688533.3246', 
'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126762 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j544497361010.3272', 'results_file': '/ansible/.ansible_async/j544497361010.3272', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-08 01:24:04.126767 | orchestrator | 2026-04-08 01:24:04.126771 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-08 01:24:04.126775 | orchestrator | Wednesday 08 April 2026 01:23:18 +0000 (0:00:09.795) 0:03:42.393 ******* 2026-04-08 01:24:04.126779 | orchestrator | changed: [localhost] 2026-04-08 01:24:04.126784 | orchestrator | 2026-04-08 01:24:04.126788 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-04-08 01:24:04.126792 | orchestrator | Wednesday 08 April 2026 01:23:25 +0000 (0:00:06.617) 0:03:49.011 ******* 2026-04-08 01:24:04.126796 | orchestrator | changed: [localhost] 2026-04-08 01:24:04.126800 | orchestrator | 2026-04-08 01:24:04.126804 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-08 01:24:04.126808 | orchestrator | Wednesday 08 April 2026 01:23:39 +0000 (0:00:13.929) 0:04:02.941 ******* 2026-04-08 01:24:04.126813 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-08 01:24:04.126817 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-08 01:24:04.126820 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-08 01:24:04.126824 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-08 01:24:04.126828 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-08 01:24:04.126832 | orchestrator | 2026-04-08 
01:24:04.126836 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-08 01:24:04.126840 | orchestrator | Wednesday 08 April 2026 01:24:03 +0000 (0:00:24.583) 0:04:27.524 ******* 2026-04-08 01:24:04.126844 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-08 01:24:04.126848 | orchestrator |  "msg": "test: 192.168.112.142" 2026-04-08 01:24:04.126852 | orchestrator | } 2026-04-08 01:24:04.126857 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-08 01:24:04.126861 | orchestrator |  "msg": "test-1: 192.168.112.191" 2026-04-08 01:24:04.126865 | orchestrator | } 2026-04-08 01:24:04.126869 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-08 01:24:04.126873 | orchestrator |  "msg": "test-2: 192.168.112.182" 2026-04-08 01:24:04.126877 | orchestrator | } 2026-04-08 01:24:04.126881 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-08 01:24:04.126884 | orchestrator |  "msg": "test-3: 192.168.112.103" 2026-04-08 01:24:04.126888 | orchestrator | } 2026-04-08 01:24:04.126892 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-08 01:24:04.126902 | orchestrator |  "msg": "test-4: 192.168.112.168" 2026-04-08 01:24:04.126910 | orchestrator | } 2026-04-08 01:24:04.126915 | orchestrator | 2026-04-08 01:24:04.126920 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 01:24:04.126925 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 01:24:04.126931 | orchestrator | 2026-04-08 01:24:04.126936 | orchestrator | 2026-04-08 01:24:04.126940 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 01:24:04.126945 | orchestrator | Wednesday 08 April 2026 01:24:03 +0000 (0:00:00.147) 0:04:27.672 ******* 2026-04-08 01:24:04.126950 | orchestrator | 
=============================================================================== 2026-04-08 01:24:04.126954 | orchestrator | Wait for instance creation to complete --------------------------------- 57.83s 2026-04-08 01:24:04.126959 | orchestrator | Create test routers ---------------------------------------------------- 33.93s 2026-04-08 01:24:04.126963 | orchestrator | Create floating ip addresses ------------------------------------------- 24.58s 2026-04-08 01:24:04.126968 | orchestrator | Create test subnets ---------------------------------------------------- 16.84s 2026-04-08 01:24:04.126972 | orchestrator | Attach test volume ----------------------------------------------------- 13.93s 2026-04-08 01:24:04.126977 | orchestrator | Create test networks --------------------------------------------------- 13.79s 2026-04-08 01:24:04.126982 | orchestrator | Add member roles to user test ------------------------------------------ 12.07s 2026-04-08 01:24:04.126987 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.82s 2026-04-08 01:24:04.126991 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.80s 2026-04-08 01:24:04.126996 | orchestrator | Create test volume ------------------------------------------------------ 6.62s 2026-04-08 01:24:04.127000 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.56s 2026-04-08 01:24:04.127005 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.05s 2026-04-08 01:24:04.127009 | orchestrator | Create ssh security group ----------------------------------------------- 4.99s 2026-04-08 01:24:04.127014 | orchestrator | Add metadata to instances ----------------------------------------------- 4.86s 2026-04-08 01:24:04.127019 | orchestrator | Create test instances --------------------------------------------------- 4.70s 2026-04-08 01:24:04.127023 | orchestrator | Create test 
server group ------------------------------------------------ 4.55s 2026-04-08 01:24:04.127028 | orchestrator | Add tag to instances ---------------------------------------------------- 4.46s 2026-04-08 01:24:04.127032 | orchestrator | Create test user -------------------------------------------------------- 4.44s 2026-04-08 01:24:04.127037 | orchestrator | Create test project ----------------------------------------------------- 4.40s 2026-04-08 01:24:04.127041 | orchestrator | Create test-admin user -------------------------------------------------- 4.29s 2026-04-08 01:24:04.331169 | orchestrator | + server_list 2026-04-08 01:24:04.331254 | orchestrator | + openstack --os-cloud test server list 2026-04-08 01:24:08.147944 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-08 01:24:08.148008 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-08 01:24:08.148018 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-08 01:24:08.148025 | orchestrator | | 38eab6d1-3924-435d-97d6-9471ea65c757 | test-4 | ACTIVE | test-3=192.168.112.168, 192.168.202.199 | N/A (booted from volume) | SCS-1L-1 | 2026-04-08 01:24:08.148033 | orchestrator | | 10eba327-f3ab-4804-929c-4481eb28ac05 | test-3 | ACTIVE | test-2=192.168.112.103, 192.168.201.143 | N/A (booted from volume) | SCS-1L-1 | 2026-04-08 01:24:08.148037 | orchestrator | | b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 | test-1 | ACTIVE | test-1=192.168.112.191, 192.168.200.200 | N/A (booted from volume) | SCS-1L-1 | 2026-04-08 01:24:08.148055 | orchestrator | | 0fcefdab-da1f-4354-9676-0e7a1a012b1e | test | ACTIVE | test-1=192.168.112.142, 192.168.200.175 | N/A (booted from volume) | SCS-1L-1 | 2026-04-08 01:24:08.148059 | orchestrator | | aa5b037b-51d0-4bb7-a12d-285404dd660c | test-2 | ACTIVE | 
test-2=192.168.112.182, 192.168.201.45 | N/A (booted from volume) | SCS-1L-1 | 2026-04-08 01:24:08.148063 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-08 01:24:08.479048 | orchestrator | + openstack --os-cloud test server show test 2026-04-08 01:24:11.426381 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:11.426449 | orchestrator | | Field | Value | 2026-04-08 01:24:11.426457 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:11.426464 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-08 01:24:11.426469 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-08 01:24:11.426475 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-08 01:24:11.426481 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-08 01:24:11.426487 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-08 01:24:11.426502 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-08 01:24:11.426516 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-08 01:24:11.426523 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-08 
01:24:11.426528 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-08 01:24:11.426534 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-08 01:24:11.426539 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-08 01:24:11.426545 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-08 01:24:11.426550 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-08 01:24:11.426556 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-08 01:24:11.426569 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-08 01:24:11.426574 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-08T01:22:25.000000 | 2026-04-08 01:24:11.426584 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-08 01:24:11.426589 | orchestrator | | accessIPv4 | | 2026-04-08 01:24:11.426597 | orchestrator | | accessIPv6 | | 2026-04-08 01:24:11.426603 | orchestrator | | addresses | test-1=192.168.112.142, 192.168.200.175 | 2026-04-08 01:24:11.426608 | orchestrator | | config_drive | | 2026-04-08 01:24:11.426614 | orchestrator | | created | 2026-04-08T01:21:57Z | 2026-04-08 01:24:11.426650 | orchestrator | | description | None | 2026-04-08 01:24:11.426656 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-08 01:24:11.426665 | orchestrator | | hostId | 2a364465c374e0fe1260b3e23ee70a07aaf341fbbba5afe187771707 | 2026-04-08 01:24:11.426671 | orchestrator | | host_status | None | 2026-04-08 01:24:11.426685 | orchestrator | | id | 0fcefdab-da1f-4354-9676-0e7a1a012b1e | 2026-04-08 01:24:11.426695 | orchestrator | | image | N/A (booted from volume) | 2026-04-08 01:24:11.426708 | orchestrator | | 
key_name | test | 2026-04-08 01:24:11.426717 | orchestrator | | locked | False | 2026-04-08 01:24:11.426725 | orchestrator | | locked_reason | None | 2026-04-08 01:24:11.426734 | orchestrator | | name | test | 2026-04-08 01:24:11.426744 | orchestrator | | pinned_availability_zone | None | 2026-04-08 01:24:11.426759 | orchestrator | | progress | 0 | 2026-04-08 01:24:11.426769 | orchestrator | | project_id | db60e5f3fcd8405f919329861c745ddf | 2026-04-08 01:24:11.426778 | orchestrator | | properties | hostname='test' | 2026-04-08 01:24:11.426793 | orchestrator | | security_groups | name='ssh' | 2026-04-08 01:24:11.426801 | orchestrator | | | name='icmp' | 2026-04-08 01:24:11.426812 | orchestrator | | server_groups | None | 2026-04-08 01:24:11.426820 | orchestrator | | status | ACTIVE | 2026-04-08 01:24:11.426834 | orchestrator | | tags | test | 2026-04-08 01:24:11.426843 | orchestrator | | trusted_image_certificates | None | 2026-04-08 01:24:11.426864 | orchestrator | | updated | 2026-04-08T01:22:55Z | 2026-04-08 01:24:11.426873 | orchestrator | | user_id | 3d8afb78ae4b4b71a639962e246dbb18 | 2026-04-08 01:24:11.426881 | orchestrator | | volumes_attached | delete_on_termination='True', id='1861cbd3-23f2-4907-84c9-8574c5d1fb12' | 2026-04-08 01:24:11.426889 | orchestrator | | | delete_on_termination='False', id='88651457-3499-40e1-b274-e84ba22352f7' | 2026-04-08 01:24:11.432299 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:11.755377 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-08 01:24:14.950579 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:14.950672 | orchestrator | | Field | Value | 2026-04-08 01:24:14.950684 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:14.950692 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-08 01:24:14.950715 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-08 01:24:14.950722 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-08 01:24:14.950728 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-08 01:24:14.950734 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-08 01:24:14.950741 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-08 01:24:14.950761 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-08 01:24:14.950773 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-08 01:24:14.950780 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-08 01:24:14.950784 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-08 01:24:14.950788 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-08 01:24:14.950796 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-08 01:24:14.950800 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-04-08 01:24:14.950804 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-08 01:24:14.950808 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-08 01:24:14.950812 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-08T01:22:25.000000 | 2026-04-08 01:24:14.950820 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-08 01:24:14.950825 | orchestrator | | accessIPv4 | | 2026-04-08 01:24:14.950829 | orchestrator | | accessIPv6 | | 2026-04-08 01:24:14.950833 | orchestrator | | addresses | test-1=192.168.112.191, 192.168.200.200 | 2026-04-08 01:24:14.950844 | orchestrator | | config_drive | | 2026-04-08 01:24:14.950850 | orchestrator | | created | 2026-04-08T01:21:58Z | 2026-04-08 01:24:14.950857 | orchestrator | | description | None | 2026-04-08 01:24:14.950863 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-08 01:24:14.950869 | orchestrator | | hostId | e5425934a5e868122650f97265647582960fc8183d85dcaa79061178 | 2026-04-08 01:24:14.950879 | orchestrator | | host_status | None | 2026-04-08 01:24:14.950891 | orchestrator | | id | b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 | 2026-04-08 01:24:14.950901 | orchestrator | | image | N/A (booted from volume) | 2026-04-08 01:24:14.950908 | orchestrator | | key_name | test | 2026-04-08 01:24:14.950919 | orchestrator | | locked | False | 2026-04-08 01:24:14.950926 | orchestrator | | locked_reason | None | 2026-04-08 01:24:14.950932 | orchestrator | | name | test-1 | 2026-04-08 01:24:14.950938 | orchestrator | | pinned_availability_zone | None | 2026-04-08 01:24:14.950945 | orchestrator | | progress | 0 | 2026-04-08 01:24:14.950951 | orchestrator | 
| project_id | db60e5f3fcd8405f919329861c745ddf | 2026-04-08 01:24:14.950958 | orchestrator | | properties | hostname='test-1' | 2026-04-08 01:24:14.950971 | orchestrator | | security_groups | name='ssh' | 2026-04-08 01:24:14.950986 | orchestrator | | | name='icmp' | 2026-04-08 01:24:14.951024 | orchestrator | | server_groups | None | 2026-04-08 01:24:14.951031 | orchestrator | | status | ACTIVE | 2026-04-08 01:24:14.951038 | orchestrator | | tags | test | 2026-04-08 01:24:14.951045 | orchestrator | | trusted_image_certificates | None | 2026-04-08 01:24:14.951051 | orchestrator | | updated | 2026-04-08T01:22:56Z | 2026-04-08 01:24:14.951058 | orchestrator | | user_id | 3d8afb78ae4b4b71a639962e246dbb18 | 2026-04-08 01:24:14.951064 | orchestrator | | volumes_attached | delete_on_termination='True', id='a19afd98-46d1-4208-90cc-3a897c6e31e7' | 2026-04-08 01:24:14.954221 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:15.225797 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-08 01:24:18.305786 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:18.305928 | orchestrator | | Field | Value | 2026-04-08 01:24:18.305950 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:18.305957 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-08 01:24:18.305964 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-08 01:24:18.305970 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-08 01:24:18.305976 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-08 01:24:18.305982 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-08 01:24:18.305988 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-08 01:24:18.306010 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-08 01:24:18.306045 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-08 01:24:18.306066 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-08 01:24:18.306073 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-08 01:24:18.306080 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-08 01:24:18.306087 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-08 01:24:18.306095 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-08 01:24:18.306102 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-08 01:24:18.306108 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-08 01:24:18.306115 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-08T01:22:27.000000 | 2026-04-08 01:24:18.306129 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-08 01:24:18.306140 | orchestrator | | accessIPv4 | | 2026-04-08 01:24:18.306153 | orchestrator | | accessIPv6 | | 2026-04-08 01:24:18.306163 | orchestrator | | 
addresses | test-2=192.168.112.182, 192.168.201.45 | 2026-04-08 01:24:18.306169 | orchestrator | | config_drive | | 2026-04-08 01:24:18.306175 | orchestrator | | created | 2026-04-08T01:21:57Z | 2026-04-08 01:24:18.306182 | orchestrator | | description | None | 2026-04-08 01:24:18.306188 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-08 01:24:18.306194 | orchestrator | | hostId | 2a364465c374e0fe1260b3e23ee70a07aaf341fbbba5afe187771707 | 2026-04-08 01:24:18.306200 | orchestrator | | host_status | None | 2026-04-08 01:24:18.306229 | orchestrator | | id | aa5b037b-51d0-4bb7-a12d-285404dd660c | 2026-04-08 01:24:18.306237 | orchestrator | | image | N/A (booted from volume) | 2026-04-08 01:24:18.306244 | orchestrator | | key_name | test | 2026-04-08 01:24:18.306254 | orchestrator | | locked | False | 2026-04-08 01:24:18.306261 | orchestrator | | locked_reason | None | 2026-04-08 01:24:18.306268 | orchestrator | | name | test-2 | 2026-04-08 01:24:18.306274 | orchestrator | | pinned_availability_zone | None | 2026-04-08 01:24:18.306280 | orchestrator | | progress | 0 | 2026-04-08 01:24:18.306294 | orchestrator | | project_id | db60e5f3fcd8405f919329861c745ddf | 2026-04-08 01:24:18.306307 | orchestrator | | properties | hostname='test-2' | 2026-04-08 01:24:18.306320 | orchestrator | | security_groups | name='ssh' | 2026-04-08 01:24:18.306330 | orchestrator | | | name='icmp' | 2026-04-08 01:24:18.306339 | orchestrator | | server_groups | None | 2026-04-08 01:24:18.306348 | orchestrator | | status | ACTIVE | 2026-04-08 01:24:18.306355 | orchestrator | | tags | test | 2026-04-08 01:24:18.306361 | orchestrator | | 
trusted_image_certificates | None | 2026-04-08 01:24:18.306369 | orchestrator | | updated | 2026-04-08T01:22:56Z | 2026-04-08 01:24:18.306376 | orchestrator | | user_id | 3d8afb78ae4b4b71a639962e246dbb18 | 2026-04-08 01:24:18.306382 | orchestrator | | volumes_attached | delete_on_termination='True', id='d26efcec-56f4-4fda-a4a7-efc69438a628' | 2026-04-08 01:24:18.309885 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:18.576313 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-08 01:24:21.506943 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:21.507058 | orchestrator | | Field | Value | 2026-04-08 01:24:21.507071 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:21.507076 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-08 01:24:21.507081 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-08 01:24:21.507085 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-08 01:24:21.507089 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-08 01:24:21.507094 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-08 01:24:21.507115 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-08 01:24:21.507133 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-08 01:24:21.507140 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-08 01:24:21.507151 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-08 01:24:21.507160 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-08 01:24:21.507168 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-08 01:24:21.507174 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-08 01:24:21.507181 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-08 01:24:21.507188 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-08 01:24:21.507199 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-08 01:24:21.507206 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-08T01:22:25.000000 | 2026-04-08 01:24:21.507217 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-08 01:24:21.507223 | orchestrator | | accessIPv4 | | 2026-04-08 01:24:21.507234 | orchestrator | | accessIPv6 | | 2026-04-08 01:24:21.507241 | orchestrator | | addresses | test-2=192.168.112.103, 192.168.201.143 | 2026-04-08 01:24:21.507246 | orchestrator | | config_drive | | 2026-04-08 01:24:21.507253 | orchestrator | | created | 2026-04-08T01:21:58Z | 2026-04-08 01:24:21.507259 | orchestrator | | description | None | 2026-04-08 01:24:21.507270 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-08 01:24:21.507276 | orchestrator | | hostId | e5425934a5e868122650f97265647582960fc8183d85dcaa79061178 | 2026-04-08 01:24:21.507283 | orchestrator | | host_status | None | 2026-04-08 01:24:21.507294 | orchestrator | | id | 10eba327-f3ab-4804-929c-4481eb28ac05 | 2026-04-08 01:24:21.507300 | orchestrator | | image | N/A (booted from volume) | 2026-04-08 01:24:21.507308 | orchestrator | | key_name | test | 2026-04-08 01:24:21.507312 | orchestrator | | locked | False | 2026-04-08 01:24:21.507316 | orchestrator | | locked_reason | None | 2026-04-08 01:24:21.507320 | orchestrator | | name | test-3 | 2026-04-08 01:24:21.507328 | orchestrator | | pinned_availability_zone | None | 2026-04-08 01:24:21.507332 | orchestrator | | progress | 0 | 2026-04-08 01:24:21.507336 | orchestrator | | project_id | db60e5f3fcd8405f919329861c745ddf | 2026-04-08 01:24:21.507340 | orchestrator | | properties | hostname='test-3' | 2026-04-08 01:24:21.507349 | orchestrator | | security_groups | name='ssh' | 2026-04-08 01:24:21.507353 | orchestrator | | | name='icmp' | 2026-04-08 01:24:21.507360 | orchestrator | | server_groups | None | 2026-04-08 01:24:21.507364 | orchestrator | | status | ACTIVE | 2026-04-08 01:24:21.507368 | orchestrator | | tags | test | 2026-04-08 01:24:21.507372 | orchestrator | | trusted_image_certificates | None | 2026-04-08 01:24:21.507380 | orchestrator | | updated | 2026-04-08T01:22:57Z | 2026-04-08 01:24:21.507384 | orchestrator | | user_id | 3d8afb78ae4b4b71a639962e246dbb18 | 2026-04-08 01:24:21.507388 | orchestrator | | volumes_attached | delete_on_termination='True', id='54558e3e-e1a1-4c21-bf5d-49dd7c0eea35' | 2026-04-08 01:24:21.512944 | orchestrator | 
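The `openstack server show` tables above are formatted for humans; the same fields can be fetched machine-readably with the CLI's `-f json` formatter. A minimal sketch, assuming python3 plus the `openstack` CLI on PATH and a clouds.yaml entry named `test` (as the `--os-cloud test` flag above implies):

```python
# Minimal sketch: query a server as JSON instead of scraping the table output
# shown above (assumes the `openstack` CLI is installed and a cloud named
# "test" is configured; the field names used here match the log: name, id,
# status).
import json
import subprocess

def fetch_server(cloud: str, name: str) -> dict:
    """Return `openstack server show NAME -f json` as a parsed dict."""
    out = subprocess.check_output(
        ["openstack", "--os-cloud", cloud, "server", "show", name, "-f", "json"]
    )
    return json.loads(out)

def summarize(info: dict) -> str:
    """One-line summary from fields also visible in the tables above."""
    return f"{info['name']} ({info['id']}): {info['status']}"
```

With the cloud reachable, `summarize(fetch_server("test", "test-1"))` would report the same ACTIVE status the table above shows for test-1.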
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:21.828778 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-08 01:24:24.978401 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:24.978500 | orchestrator | | Field | Value | 2026-04-08 01:24:24.978510 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:24.978515 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-08 01:24:24.978519 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-08 01:24:24.978540 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-08 01:24:24.978545 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-08 01:24:24.978549 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-08 01:24:24.978553 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-08 
01:24:24.978570 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-08 01:24:24.978906 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-08 01:24:24.978922 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-08 01:24:24.978928 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-08 01:24:24.978933 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-08 01:24:24.978944 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-08 01:24:24.978948 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-08 01:24:24.978952 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-08 01:24:24.978956 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-08 01:24:24.978961 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-08T01:22:26.000000 | 2026-04-08 01:24:24.978977 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-08 01:24:24.978981 | orchestrator | | accessIPv4 | | 2026-04-08 01:24:24.978986 | orchestrator | | accessIPv6 | | 2026-04-08 01:24:24.978990 | orchestrator | | addresses | test-3=192.168.112.168, 192.168.202.199 | 2026-04-08 01:24:24.978997 | orchestrator | | config_drive | | 2026-04-08 01:24:24.979001 | orchestrator | | created | 2026-04-08T01:21:59Z | 2026-04-08 01:24:24.979005 | orchestrator | | description | None | 2026-04-08 01:24:24.979009 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-08 01:24:24.979014 | orchestrator | | hostId | 2a364465c374e0fe1260b3e23ee70a07aaf341fbbba5afe187771707 | 2026-04-08 01:24:24.979021 | orchestrator | | host_status | None | 2026-04-08 01:24:24.979029 | orchestrator | | id | 
38eab6d1-3924-435d-97d6-9471ea65c757 | 2026-04-08 01:24:24.979034 | orchestrator | | image | N/A (booted from volume) | 2026-04-08 01:24:24.979038 | orchestrator | | key_name | test | 2026-04-08 01:24:24.979045 | orchestrator | | locked | False | 2026-04-08 01:24:24.979049 | orchestrator | | locked_reason | None | 2026-04-08 01:24:24.979053 | orchestrator | | name | test-4 | 2026-04-08 01:24:24.979058 | orchestrator | | pinned_availability_zone | None | 2026-04-08 01:24:24.979062 | orchestrator | | progress | 0 | 2026-04-08 01:24:24.979066 | orchestrator | | project_id | db60e5f3fcd8405f919329861c745ddf | 2026-04-08 01:24:24.979073 | orchestrator | | properties | hostname='test-4' | 2026-04-08 01:24:24.979081 | orchestrator | | security_groups | name='ssh' | 2026-04-08 01:24:24.979086 | orchestrator | | | name='icmp' | 2026-04-08 01:24:24.979090 | orchestrator | | server_groups | None | 2026-04-08 01:24:24.979097 | orchestrator | | status | ACTIVE | 2026-04-08 01:24:24.979101 | orchestrator | | tags | test | 2026-04-08 01:24:24.979105 | orchestrator | | trusted_image_certificates | None | 2026-04-08 01:24:24.979117 | orchestrator | | updated | 2026-04-08T01:22:58Z | 2026-04-08 01:24:24.979122 | orchestrator | | user_id | 3d8afb78ae4b4b71a639962e246dbb18 | 2026-04-08 01:24:24.979132 | orchestrator | | volumes_attached | delete_on_termination='True', id='6a53eeaf-4458-44a9-ab1c-ed9be56f9977' | 2026-04-08 01:24:24.982690 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-08 01:24:25.245475 | orchestrator | + server_ping 2026-04-08 01:24:25.246762 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-08 01:24:25.246812 | orchestrator | ++ tr -d '\r' 2026-04-08 01:24:28.274734 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:24:28.274824 | orchestrator | + ping -c3 192.168.112.168 2026-04-08 01:24:28.288515 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 2026-04-08 01:24:28.288591 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=7.71 ms 2026-04-08 01:24:29.284769 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.52 ms 2026-04-08 01:24:30.286165 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.82 ms 2026-04-08 01:24:30.286279 | orchestrator | 2026-04-08 01:24:30.286292 | orchestrator | --- 192.168.112.168 ping statistics --- 2026-04-08 01:24:30.286301 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:24:30.286308 | orchestrator | rtt min/avg/max/mdev = 1.821/4.017/7.710/2.626 ms 2026-04-08 01:24:30.287651 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:24:30.287713 | orchestrator | + ping -c3 192.168.112.142 2026-04-08 01:24:30.305258 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data. 
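The `+ server_ping` trace above lists ACTIVE floating IPs with `-f value`, strips stray carriage returns via `tr -d '\r'`, and pings each address three times. A sketch of that flow in Python; the helper names are inferred from the trace, not taken from the script source:

```python
# Sketch of the server_ping helper traced above: list ACTIVE floating IPs in
# machine-readable form, drop '\r' artifacts, then ping each address 3 times.
# Assumes the `openstack` CLI and a configured cloud named "test".
import subprocess

def parse_value_output(raw: str) -> list[str]:
    """Split `-f value` CLI output into clean addresses (drop '\r', blanks)."""
    return [line.strip() for line in raw.replace("\r", "").splitlines() if line.strip()]

def server_ping(cloud: str = "test") -> None:
    raw = subprocess.check_output(
        ["openstack", "--os-cloud", cloud, "floating", "ip", "list",
         "--status", "ACTIVE", "-f", "value", "-c", "Floating IP Address"],
        text=True,
    )
    for address in parse_value_output(raw):
        # check=True makes a single unreachable address fail the whole check,
        # matching the job's behaviour under `set -e`.
        subprocess.run(["ping", "-c3", address], check=True)
```

The `\r` stripping matters because the CLI output in this job carries DOS line endings, which would otherwise corrupt the address passed to ping.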
2026-04-08 01:24:30.305335 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=13.2 ms 2026-04-08 01:24:31.295910 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=2.17 ms 2026-04-08 01:24:32.297729 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=1.72 ms 2026-04-08 01:24:32.297805 | orchestrator | 2026-04-08 01:24:32.297812 | orchestrator | --- 192.168.112.142 ping statistics --- 2026-04-08 01:24:32.297818 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:24:32.297823 | orchestrator | rtt min/avg/max/mdev = 1.719/5.705/13.230/5.324 ms 2026-04-08 01:24:32.297828 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:24:32.297833 | orchestrator | + ping -c3 192.168.112.103 2026-04-08 01:24:32.311678 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2026-04-08 01:24:32.311766 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=8.96 ms 2026-04-08 01:24:33.306250 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.38 ms 2026-04-08 01:24:34.308254 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=2.17 ms 2026-04-08 01:24:34.308323 | orchestrator | 2026-04-08 01:24:34.308330 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-08 01:24:34.308337 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:24:34.308341 | orchestrator | rtt min/avg/max/mdev = 2.166/4.503/8.964/3.155 ms 2026-04-08 01:24:34.308346 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:24:34.308351 | orchestrator | + ping -c3 192.168.112.182 2026-04-08 01:24:34.318829 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 
2026-04-08 01:24:34.318899 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=5.96 ms 2026-04-08 01:24:35.316502 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.23 ms 2026-04-08 01:24:36.317705 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.82 ms 2026-04-08 01:24:36.317781 | orchestrator | 2026-04-08 01:24:36.317788 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-04-08 01:24:36.317795 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:24:36.317800 | orchestrator | rtt min/avg/max/mdev = 1.820/3.335/5.956/1.860 ms 2026-04-08 01:24:36.318354 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:24:36.318412 | orchestrator | + ping -c3 192.168.112.191 2026-04-08 01:24:36.329201 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data. 2026-04-08 01:24:36.329277 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=6.97 ms 2026-04-08 01:24:37.325532 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.02 ms 2026-04-08 01:24:38.326741 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.65 ms 2026-04-08 01:24:38.326817 | orchestrator | 2026-04-08 01:24:38.326825 | orchestrator | --- 192.168.112.191 ping statistics --- 2026-04-08 01:24:38.326831 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:24:38.326836 | orchestrator | rtt min/avg/max/mdev = 1.651/3.547/6.968/2.423 ms 2026-04-08 01:24:38.327137 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-08 01:24:38.327204 | orchestrator | + compute_list 2026-04-08 01:24:38.327215 | orchestrator | + osism manage compute list testbed-node-3 2026-04-08 01:24:39.711009 | orchestrator | 2026-04-08 01:24:39 | ERROR  | Unable to get ansible vault password 2026-04-08 01:24:39.711130 
| orchestrator | 2026-04-08 01:24:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:24:39.711145 | orchestrator | 2026-04-08 01:24:39 | ERROR  | Dropping encrypted entries 2026-04-08 01:24:43.393917 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-08 01:24:43.394009 | orchestrator | | ID | Name | Status | 2026-04-08 01:24:43.394080 | orchestrator | |--------------------------------------+--------+----------| 2026-04-08 01:24:43.394086 | orchestrator | | 10eba327-f3ab-4804-929c-4481eb28ac05 | test-3 | ACTIVE | 2026-04-08 01:24:43.394090 | orchestrator | | b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 | test-1 | ACTIVE | 2026-04-08 01:24:43.394094 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-08 01:24:43.727866 | orchestrator | + osism manage compute list testbed-node-4 2026-04-08 01:24:45.433489 | orchestrator | 2026-04-08 01:24:45 | ERROR  | Unable to get ansible vault password 2026-04-08 01:24:45.433574 | orchestrator | 2026-04-08 01:24:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:24:45.433587 | orchestrator | 2026-04-08 01:24:45 | ERROR  | Dropping encrypted entries 2026-04-08 01:24:47.138730 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-08 01:24:47.138823 | orchestrator | | ID | Name | Status | 2026-04-08 01:24:47.138833 | orchestrator | |--------------------------------------+--------+----------| 2026-04-08 01:24:47.138840 | orchestrator | | 38eab6d1-3924-435d-97d6-9471ea65c757 | test-4 | ACTIVE | 2026-04-08 01:24:47.138846 | orchestrator | | 0fcefdab-da1f-4354-9676-0e7a1a012b1e | test | ACTIVE | 2026-04-08 01:24:47.138853 | orchestrator | | aa5b037b-51d0-4bb7-a12d-285404dd660c | test-2 | ACTIVE | 2026-04-08 01:24:47.138860 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-08 01:24:47.476129 | orchestrator | + osism manage compute list testbed-node-5 2026-04-08 01:24:49.147476 | orchestrator | 2026-04-08 01:24:49 | ERROR  | Unable to get ansible vault password 2026-04-08 01:24:49.147559 | orchestrator | 2026-04-08 01:24:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:24:49.147575 | orchestrator | 2026-04-08 01:24:49 | ERROR  | Dropping encrypted entries 2026-04-08 01:24:50.838590 | orchestrator | +------+--------+----------+ 2026-04-08 01:24:50.838744 | orchestrator | | ID | Name | Status | 2026-04-08 01:24:50.838757 | orchestrator | |------+--------+----------| 2026-04-08 01:24:50.838765 | orchestrator | +------+--------+----------+ 2026-04-08 01:24:51.322416 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-08 01:24:52.848542 | orchestrator | 2026-04-08 01:24:52 | ERROR  | Unable to get ansible vault password 2026-04-08 01:24:52.848626 | orchestrator | 2026-04-08 01:24:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:24:52.848657 | orchestrator | 2026-04-08 01:24:52 | ERROR  | Dropping encrypted entries 2026-04-08 01:24:54.359950 | orchestrator | 2026-04-08 01:24:54 | INFO  | Live migrating server 38eab6d1-3924-435d-97d6-9471ea65c757 2026-04-08 01:25:07.359713 | orchestrator | 2026-04-08 01:25:07 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:09.785685 | orchestrator | 2026-04-08 01:25:09 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:12.465196 | orchestrator | 2026-04-08 01:25:12 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:14.910881 | orchestrator | 2026-04-08 
01:25:14 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:17.265593 | orchestrator | 2026-04-08 01:25:17 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:19.453926 | orchestrator | 2026-04-08 01:25:19 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:21.662075 | orchestrator | 2026-04-08 01:25:21 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:23.885054 | orchestrator | 2026-04-08 01:25:23 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:26.336743 | orchestrator | 2026-04-08 01:25:26 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:28.736152 | orchestrator | 2026-04-08 01:25:28 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress 2026-04-08 01:25:31.161854 | orchestrator | 2026-04-08 01:25:31 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) completed with status ACTIVE 2026-04-08 01:25:31.161947 | orchestrator | 2026-04-08 01:25:31 | INFO  | Live migrating server 0fcefdab-da1f-4354-9676-0e7a1a012b1e 2026-04-08 01:25:43.503091 | orchestrator | 2026-04-08 01:25:43 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:25:45.840274 | orchestrator | 2026-04-08 01:25:45 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:25:48.183510 | orchestrator | 2026-04-08 01:25:48 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:25:50.543224 | orchestrator | 2026-04-08 01:25:50 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 
01:25:52.935072 | orchestrator | 2026-04-08 01:25:52 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:25:55.112787 | orchestrator | 2026-04-08 01:25:55 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:25:57.351410 | orchestrator | 2026-04-08 01:25:57 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:25:59.639621 | orchestrator | 2026-04-08 01:25:59 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:26:01.947421 | orchestrator | 2026-04-08 01:26:01 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:26:04.249644 | orchestrator | 2026-04-08 01:26:04 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:26:06.601476 | orchestrator | 2026-04-08 01:26:06 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) completed with status ACTIVE 2026-04-08 01:26:06.601549 | orchestrator | 2026-04-08 01:26:06 | INFO  | Live migrating server aa5b037b-51d0-4bb7-a12d-285404dd660c 2026-04-08 01:26:19.121318 | orchestrator | 2026-04-08 01:26:19 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:26:21.400475 | orchestrator | 2026-04-08 01:26:21 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:26:23.677426 | orchestrator | 2026-04-08 01:26:23 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:26:25.924951 | orchestrator | 2026-04-08 01:26:25 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:26:28.211364 | orchestrator | 2026-04-08 01:26:28 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c 
(test-2) is still in progress 2026-04-08 01:26:30.424837 | orchestrator | 2026-04-08 01:26:30 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:26:32.636046 | orchestrator | 2026-04-08 01:26:32 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:26:34.904325 | orchestrator | 2026-04-08 01:26:34 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:26:37.326825 | orchestrator | 2026-04-08 01:26:37 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) completed with status ACTIVE 2026-04-08 01:26:37.678579 | orchestrator | + compute_list 2026-04-08 01:26:37.678708 | orchestrator | + osism manage compute list testbed-node-3 2026-04-08 01:26:39.296520 | orchestrator | 2026-04-08 01:26:39 | ERROR  | Unable to get ansible vault password 2026-04-08 01:26:39.296608 | orchestrator | 2026-04-08 01:26:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:26:39.296618 | orchestrator | 2026-04-08 01:26:39 | ERROR  | Dropping encrypted entries 2026-04-08 01:26:40.802359 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-08 01:26:40.802430 | orchestrator | | ID | Name | Status | 2026-04-08 01:26:40.802444 | orchestrator | |--------------------------------------+--------+----------| 2026-04-08 01:26:40.802449 | orchestrator | | 38eab6d1-3924-435d-97d6-9471ea65c757 | test-4 | ACTIVE | 2026-04-08 01:26:40.802453 | orchestrator | | 10eba327-f3ab-4804-929c-4481eb28ac05 | test-3 | ACTIVE | 2026-04-08 01:26:40.802458 | orchestrator | | b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 | test-1 | ACTIVE | 2026-04-08 01:26:40.802467 | orchestrator | | 0fcefdab-da1f-4354-9676-0e7a1a012b1e | test | ACTIVE | 2026-04-08 01:26:40.802473 | orchestrator | | aa5b037b-51d0-4bb7-a12d-285404dd660c | test-2 | ACTIVE | 
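The migrate step above repeatedly logs "still in progress" before "completed with status ACTIVE". A hedged sketch of that poll loop, assuming osism's real implementation differs in detail; with openstacksdk the migration itself would be triggered via `conn.compute.live_migrate_server(server, host=target, block_migration="auto")`:

```python
# Hedged sketch of the poll loop behind `osism manage compute migrate`:
# after triggering a live migration, re-read the server until Nova clears
# task_state and the instance sits on the target host (or ends up in ERROR).
import time

def poll_done(status, task_state, host, target):
    """A live migration is finished once task_state is cleared and the
    instance is either on the target host or in ERROR."""
    return task_state is None and (host == target or status == "ERROR")

def wait_for_live_migration(get_server, target, interval=2.0):
    """Poll until done. get_server is a zero-arg callable returning an object
    with .status, .task_state and .compute_host, e.g.
    lambda: conn.compute.get_server(server_id) with openstacksdk.
    Sketch only: no timeout handling."""
    while True:
        srv = get_server()
        if poll_done(srv.status, srv.task_state, srv.compute_host, target):
            return srv.status
        time.sleep(interval)
```

The roughly two-second cadence of the "still in progress" lines above corresponds to the poll interval; note that `compute_host` (OS-EXT-SRV-ATTR:host) is only visible with admin credentials, which is why it shows as None in the unprivileged `server show` output earlier in the log.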
2026-04-08 01:26:40.802479 | orchestrator | +--------------------------------------+--------+----------+
2026-04-08 01:26:41.109766 | orchestrator | + osism manage compute list testbed-node-4
2026-04-08 01:26:42.750458 | orchestrator | 2026-04-08 01:26:42 | ERROR  | Unable to get ansible vault password
2026-04-08 01:26:42.750551 | orchestrator | 2026-04-08 01:26:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:26:42.750568 | orchestrator | 2026-04-08 01:26:42 | ERROR  | Dropping encrypted entries
2026-04-08 01:26:43.924797 | orchestrator | +------+--------+----------+
2026-04-08 01:26:43.924909 | orchestrator | | ID | Name | Status |
2026-04-08 01:26:43.924921 | orchestrator | |------+--------+----------|
2026-04-08 01:26:43.924929 | orchestrator | +------+--------+----------+
2026-04-08 01:26:44.292764 | orchestrator | + osism manage compute list testbed-node-5
2026-04-08 01:26:45.925628 | orchestrator | 2026-04-08 01:26:45 | ERROR  | Unable to get ansible vault password
2026-04-08 01:26:45.925738 | orchestrator | 2026-04-08 01:26:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:26:45.925763 | orchestrator | 2026-04-08 01:26:45 | ERROR  | Dropping encrypted entries
2026-04-08 01:26:47.272179 | orchestrator | +------+--------+----------+
2026-04-08 01:26:47.272269 | orchestrator | | ID | Name | Status |
2026-04-08 01:26:47.272280 | orchestrator | |------+--------+----------|
2026-04-08 01:26:47.272287 | orchestrator | +------+--------+----------+
2026-04-08 01:26:47.607468 | orchestrator | + server_ping
2026-04-08 01:26:47.608846 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-08 01:26:47.609084 | orchestrator | ++ tr -d '\r'
2026-04-08 01:26:50.455686 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:26:50.455781 | orchestrator | + ping -c3 192.168.112.168
2026-04-08 01:26:50.467626 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-04-08 01:26:50.467795 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=8.56 ms
2026-04-08 01:26:51.463159 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.20 ms
2026-04-08 01:26:52.464730 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.74 ms
2026-04-08 01:26:52.464801 | orchestrator |
2026-04-08 01:26:52.464808 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-04-08 01:26:52.464850 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:26:52.464856 | orchestrator | rtt min/avg/max/mdev = 1.737/4.166/8.561/3.113 ms
2026-04-08 01:26:52.464861 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:26:52.464866 | orchestrator | + ping -c3 192.168.112.142
2026-04-08 01:26:52.478805 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data.
2026-04-08 01:26:52.478877 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=8.46 ms
2026-04-08 01:26:53.474237 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=2.00 ms
2026-04-08 01:26:54.475618 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=1.96 ms
2026-04-08 01:26:54.475727 | orchestrator |
2026-04-08 01:26:54.475736 | orchestrator | --- 192.168.112.142 ping statistics ---
2026-04-08 01:26:54.475742 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:26:54.475746 | orchestrator | rtt min/avg/max/mdev = 1.956/4.138/8.462/3.057 ms
2026-04-08 01:26:54.477627 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:26:54.477728 | orchestrator | + ping -c3 192.168.112.103
2026-04-08 01:26:54.490652 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2026-04-08 01:26:54.490788 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.86 ms
2026-04-08 01:26:55.485726 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=1.93 ms
2026-04-08 01:26:56.486634 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.64 ms
2026-04-08 01:26:56.486748 | orchestrator |
2026-04-08 01:26:56.486757 | orchestrator | --- 192.168.112.103 ping statistics ---
2026-04-08 01:26:56.486764 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-08 01:26:56.486769 | orchestrator | rtt min/avg/max/mdev = 1.638/3.809/7.859/2.866 ms
2026-04-08 01:26:56.486774 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:26:56.486779 | orchestrator | + ping -c3 192.168.112.182
2026-04-08 01:26:56.498439 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-04-08 01:26:56.498508 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.08 ms
2026-04-08 01:26:57.495162 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.15 ms
2026-04-08 01:26:58.497261 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.10 ms
2026-04-08 01:26:58.497355 | orchestrator |
2026-04-08 01:26:58.497367 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-04-08 01:26:58.497375 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:26:58.497382 | orchestrator | rtt min/avg/max/mdev = 2.100/3.777/7.078/2.334 ms
2026-04-08 01:26:58.497389 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:26:58.497396 | orchestrator | + ping -c3 192.168.112.191
2026-04-08 01:26:58.508346 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-04-08 01:26:58.508445 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=6.27 ms
2026-04-08 01:26:59.505725 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=1.93 ms
2026-04-08 01:27:00.506196 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.60 ms
2026-04-08 01:27:00.506293 | orchestrator |
2026-04-08 01:27:00.506305 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-04-08 01:27:00.506313 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-08 01:27:00.506321 | orchestrator | rtt min/avg/max/mdev = 1.598/3.265/6.267/2.127 ms
2026-04-08 01:27:00.506906 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-04-08 01:27:02.012281 | orchestrator | 2026-04-08 01:27:02 | ERROR  | Unable to get ansible vault password
2026-04-08 01:27:02.012384 | orchestrator | 2026-04-08 01:27:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:27:02.012397 | orchestrator | 2026-04-08 01:27:02 | ERROR  | Dropping encrypted entries
2026-04-08 01:27:03.141911 | orchestrator | 2026-04-08 01:27:03 | INFO  | No migratable instances found on node testbed-node-5
2026-04-08 01:27:03.463534 | orchestrator | + compute_list
2026-04-08 01:27:03.463624 | orchestrator | + osism manage compute list testbed-node-3
2026-04-08 01:27:05.066905 | orchestrator | 2026-04-08 01:27:05 | ERROR  | Unable to get ansible vault password
2026-04-08 01:27:05.067003 | orchestrator | 2026-04-08 01:27:05 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:27:05.067017 | orchestrator | 2026-04-08 01:27:05 | ERROR  | Dropping encrypted entries
2026-04-08 01:27:06.583369 | orchestrator | +--------------------------------------+--------+----------+
2026-04-08 01:27:06.583436 | orchestrator | | ID | Name | Status |
2026-04-08 01:27:06.583447 | orchestrator | |--------------------------------------+--------+----------|
2026-04-08 01:27:06.583456 | orchestrator | | 38eab6d1-3924-435d-97d6-9471ea65c757 | test-4 | ACTIVE |
2026-04-08 01:27:06.583464 | orchestrator | | 10eba327-f3ab-4804-929c-4481eb28ac05 | test-3 | ACTIVE |
2026-04-08 01:27:06.583473 | orchestrator | | b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 | test-1 | ACTIVE |
2026-04-08 01:27:06.583481 | orchestrator | | 0fcefdab-da1f-4354-9676-0e7a1a012b1e | test | ACTIVE |
2026-04-08 01:27:06.583488 | orchestrator | | aa5b037b-51d0-4bb7-a12d-285404dd660c | test-2 | ACTIVE |
2026-04-08 01:27:06.583496 | orchestrator | +--------------------------------------+--------+----------+
2026-04-08 01:27:06.910713 | orchestrator | + osism manage compute list testbed-node-4
2026-04-08 01:27:08.540609 | orchestrator | 2026-04-08 01:27:08 | ERROR  | Unable to get ansible vault password
2026-04-08 01:27:08.540947 | orchestrator | 2026-04-08 01:27:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:27:08.541028 | orchestrator | 2026-04-08 01:27:08 | ERROR  | Dropping encrypted entries
2026-04-08 01:27:09.694953 | orchestrator | +------+--------+----------+
2026-04-08 01:27:09.695048 | orchestrator | | ID | Name | Status |
2026-04-08 01:27:09.695056 | orchestrator | |------+--------+----------|
2026-04-08 01:27:09.695060 | orchestrator | +------+--------+----------+
2026-04-08 01:27:10.007755 | orchestrator | + osism manage compute list testbed-node-5
2026-04-08 01:27:11.635093 | orchestrator | 2026-04-08 01:27:11 | ERROR  | Unable to get ansible vault password
2026-04-08 01:27:11.635224 | orchestrator | 2026-04-08 01:27:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:27:11.635252 | orchestrator | 2026-04-08 01:27:11 | ERROR  | Dropping encrypted entries
2026-04-08 01:27:12.790559 | orchestrator | +------+--------+----------+
2026-04-08 01:27:12.790699 | orchestrator | | ID | Name | Status |
2026-04-08 01:27:12.790710 | orchestrator | |------+--------+----------|
2026-04-08 01:27:12.790714 | orchestrator | +------+--------+----------+
2026-04-08 01:27:13.175491 | orchestrator | + server_ping
2026-04-08 01:27:13.176422 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-08 01:27:13.176614 | orchestrator | ++ tr -d '\r'
2026-04-08 01:27:16.033384 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:27:16.033467 | orchestrator | + ping -c3 192.168.112.168
2026-04-08 01:27:16.045770 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-04-08 01:27:16.045895 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=9.62 ms
2026-04-08 01:27:17.040071 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.28 ms
2026-04-08 01:27:18.042172 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=2.20 ms
2026-04-08 01:27:18.042266 | orchestrator |
2026-04-08 01:27:18.042278 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-04-08 01:27:18.042286 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:27:18.042292 | orchestrator | rtt min/avg/max/mdev = 2.199/4.698/9.621/3.480 ms
2026-04-08 01:27:18.043147 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:27:18.043198 | orchestrator | + ping -c3 192.168.112.142
2026-04-08 01:27:18.058252 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data.
2026-04-08 01:27:18.058365 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=10.4 ms
2026-04-08 01:27:19.050896 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=2.30 ms
2026-04-08 01:27:20.052654 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=2.14 ms
2026-04-08 01:27:20.052827 | orchestrator |
2026-04-08 01:27:20.052838 | orchestrator | --- 192.168.112.142 ping statistics ---
2026-04-08 01:27:20.052848 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:27:20.052855 | orchestrator | rtt min/avg/max/mdev = 2.144/4.956/10.427/3.869 ms
2026-04-08 01:27:20.053326 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:27:20.053384 | orchestrator | + ping -c3 192.168.112.103
2026-04-08 01:27:20.067093 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2026-04-08 01:27:20.067185 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=9.05 ms
2026-04-08 01:27:21.061456 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.00 ms
2026-04-08 01:27:22.062597 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.60 ms
2026-04-08 01:27:22.062730 | orchestrator |
2026-04-08 01:27:22.062744 | orchestrator | --- 192.168.112.103 ping statistics ---
2026-04-08 01:27:22.062752 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:27:22.062759 | orchestrator | rtt min/avg/max/mdev = 1.602/4.216/9.045/3.418 ms
2026-04-08 01:27:22.063055 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:27:22.063074 | orchestrator | + ping -c3 192.168.112.182
2026-04-08 01:27:22.078714 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-04-08 01:27:22.078787 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=10.8 ms
2026-04-08 01:27:23.071592 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.14 ms
2026-04-08 01:27:24.073601 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.84 ms
2026-04-08 01:27:24.073720 | orchestrator |
2026-04-08 01:27:24.073731 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-04-08 01:27:24.073737 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:27:24.073741 | orchestrator | rtt min/avg/max/mdev = 1.840/4.914/10.767/4.140 ms
2026-04-08 01:27:24.073747 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:27:24.073752 | orchestrator | + ping -c3 192.168.112.191
2026-04-08 01:27:24.082311 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-04-08 01:27:24.082381 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=4.74 ms
2026-04-08 01:27:25.081812 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.10 ms
2026-04-08 01:27:26.082800 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.39 ms
2026-04-08 01:27:26.082967 | orchestrator |
2026-04-08 01:27:26.082984 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-04-08 01:27:26.082993 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:27:26.083000 | orchestrator | rtt min/avg/max/mdev = 1.385/2.744/4.743/1.443 ms
2026-04-08 01:27:26.083016 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-04-08 01:27:27.751107 | orchestrator | 2026-04-08 01:27:27 | ERROR  | Unable to get ansible vault password
2026-04-08 01:27:27.751183 | orchestrator | 2026-04-08 01:27:27 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:27:27.751192 | orchestrator | 2026-04-08 01:27:27 | ERROR  | Dropping encrypted entries
2026-04-08 01:27:29.536171 | orchestrator | 2026-04-08 01:27:29 | INFO  | Live migrating server 38eab6d1-3924-435d-97d6-9471ea65c757
2026-04-08 01:27:40.002559 | orchestrator | 2026-04-08 01:27:40 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:42.346215 | orchestrator | 2026-04-08 01:27:42 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:44.849833 | orchestrator | 2026-04-08 01:27:44 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:47.446903 | orchestrator | 2026-04-08 01:27:47 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:49.699938 | orchestrator | 2026-04-08 01:27:49 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:51.920183 | orchestrator | 2026-04-08 01:27:51 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:54.141513 | orchestrator | 2026-04-08 01:27:54 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:56.491989 | orchestrator | 2026-04-08 01:27:56 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:27:58.797904 | orchestrator | 2026-04-08 01:27:58 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) completed with status ACTIVE
2026-04-08 01:27:58.797978 | orchestrator | 2026-04-08 01:27:58 | INFO  | Live migrating server 10eba327-f3ab-4804-929c-4481eb28ac05
2026-04-08 01:28:09.546008 | orchestrator | 2026-04-08 01:28:09 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:11.887810 | orchestrator | 2026-04-08 01:28:11 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:14.151792 | orchestrator | 2026-04-08 01:28:14 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:16.433940 | orchestrator | 2026-04-08 01:28:16 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:18.756949 | orchestrator | 2026-04-08 01:28:18 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:21.003475 | orchestrator | 2026-04-08 01:28:21 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:23.623434 | orchestrator | 2026-04-08 01:28:23 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:25.900695 | orchestrator | 2026-04-08 01:28:25 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:28:28.204112 | orchestrator | 2026-04-08 01:28:28 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) completed with status ACTIVE
2026-04-08 01:28:28.204203 | orchestrator | 2026-04-08 01:28:28 | INFO  | Live migrating server b75e2c17-c4f3-41f1-a8f3-19e8933b8c69
2026-04-08 01:28:39.960098 | orchestrator | 2026-04-08 01:28:39 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:42.219354 | orchestrator | 2026-04-08 01:28:42 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:44.555469 | orchestrator | 2026-04-08 01:28:44 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:46.914575 | orchestrator | 2026-04-08 01:28:46 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:49.198554 | orchestrator | 2026-04-08 01:28:49 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:51.496082 | orchestrator | 2026-04-08 01:28:51 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:53.814401 | orchestrator | 2026-04-08 01:28:53 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:56.134970 | orchestrator | 2026-04-08 01:28:56 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress
2026-04-08 01:28:58.408572 | orchestrator | 2026-04-08 01:28:58 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) completed with status ACTIVE
2026-04-08 01:28:58.408741 | orchestrator | 2026-04-08 01:28:58 | INFO  | Live migrating server 0fcefdab-da1f-4354-9676-0e7a1a012b1e
2026-04-08 01:29:08.519862 | orchestrator | 2026-04-08 01:29:08 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:10.869394 | orchestrator | 2026-04-08 01:29:10 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:13.274848 | orchestrator | 2026-04-08 01:29:13 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:15.543699 | orchestrator | 2026-04-08 01:29:15 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:17.926819 | orchestrator | 2026-04-08 01:29:17 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:20.283977 | orchestrator | 2026-04-08 01:29:20 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:22.596033 | orchestrator | 2026-04-08 01:29:22 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:24.872262 | orchestrator | 2026-04-08 01:29:24 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:27.244374 | orchestrator | 2026-04-08 01:29:27 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:29.549009 | orchestrator | 2026-04-08 01:29:29 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress
2026-04-08 01:29:31.855605 | orchestrator | 2026-04-08 01:29:31 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) completed with status ACTIVE
2026-04-08 01:29:31.855700 | orchestrator | 2026-04-08 01:29:31 | INFO  | Live migrating server aa5b037b-51d0-4bb7-a12d-285404dd660c
2026-04-08 01:29:41.173353 | orchestrator | 2026-04-08 01:29:41 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:43.506989 | orchestrator | 2026-04-08 01:29:43 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:45.862229 | orchestrator | 2026-04-08 01:29:45 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:48.232155 | orchestrator | 2026-04-08 01:29:48 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:50.632949 | orchestrator | 2026-04-08 01:29:50 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:53.050171 | orchestrator | 2026-04-08 01:29:53 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:55.264079 | orchestrator | 2026-04-08 01:29:55 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:57.545602 | orchestrator | 2026-04-08 01:29:57 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress
2026-04-08 01:29:59.805857 | orchestrator | 2026-04-08 01:29:59 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) completed with status ACTIVE
2026-04-08 01:30:00.148080 | orchestrator | + compute_list
2026-04-08 01:30:00.148154 | orchestrator | + osism manage compute list testbed-node-3
2026-04-08 01:30:01.780503 | orchestrator | 2026-04-08 01:30:01 | ERROR  | Unable to get ansible vault password
2026-04-08 01:30:01.780574 | orchestrator | 2026-04-08 01:30:01 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:30:01.780582 | orchestrator | 2026-04-08 01:30:01 | ERROR  | Dropping encrypted entries
2026-04-08 01:30:02.969578 | orchestrator | +------+--------+----------+
2026-04-08 01:30:02.969824 | orchestrator | | ID | Name | Status |
2026-04-08 01:30:02.969846 | orchestrator | |------+--------+----------|
2026-04-08 01:30:02.969856 | orchestrator | +------+--------+----------+
2026-04-08 01:30:03.327004 | orchestrator | + osism manage compute list testbed-node-4
2026-04-08 01:30:04.926886 | orchestrator | 2026-04-08 01:30:04 | ERROR  | Unable to get ansible vault password
2026-04-08 01:30:04.926999 | orchestrator | 2026-04-08 01:30:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:30:04.927013 | orchestrator | 2026-04-08 01:30:04 | ERROR  | Dropping encrypted entries
2026-04-08 01:30:06.533132 | orchestrator | +--------------------------------------+--------+----------+
2026-04-08 01:30:06.533224 | orchestrator | | ID | Name | Status |
2026-04-08 01:30:06.533235 | orchestrator | |--------------------------------------+--------+----------|
2026-04-08 01:30:06.533242 | orchestrator | | 38eab6d1-3924-435d-97d6-9471ea65c757 | test-4 | ACTIVE |
2026-04-08 01:30:06.533249 | orchestrator | | 10eba327-f3ab-4804-929c-4481eb28ac05 | test-3 | ACTIVE |
2026-04-08 01:30:06.533256 | orchestrator | | b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 | test-1 | ACTIVE |
2026-04-08 01:30:06.533262 | orchestrator | | 0fcefdab-da1f-4354-9676-0e7a1a012b1e | test | ACTIVE |
2026-04-08 01:30:06.533267 | orchestrator | | aa5b037b-51d0-4bb7-a12d-285404dd660c | test-2 | ACTIVE |
2026-04-08 01:30:06.533273 | orchestrator | +--------------------------------------+--------+----------+
2026-04-08 01:30:06.849764 | orchestrator | + osism manage compute list testbed-node-5
2026-04-08 01:30:08.465811 | orchestrator | 2026-04-08 01:30:08 | ERROR  | Unable to get ansible vault password
2026-04-08 01:30:08.465867 | orchestrator | 2026-04-08 01:30:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:30:08.465876 | orchestrator | 2026-04-08 01:30:08 | ERROR  | Dropping encrypted entries
2026-04-08 01:30:09.519392 | orchestrator | +------+--------+----------+
2026-04-08 01:30:09.519457 | orchestrator | | ID | Name | Status |
2026-04-08 01:30:09.519466 | orchestrator | |------+--------+----------|
2026-04-08 01:30:09.519473 | orchestrator | +------+--------+----------+
2026-04-08 01:30:09.837075 | orchestrator | + server_ping
2026-04-08 01:30:09.839048 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-08 01:30:09.839121 | orchestrator | ++ tr -d '\r'
2026-04-08 01:30:12.834250 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:30:12.834352 | orchestrator | + ping -c3 192.168.112.168
2026-04-08 01:30:12.846442 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-04-08 01:30:12.846528 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=8.54 ms
2026-04-08 01:30:13.842700 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.69 ms
2026-04-08 01:30:14.843911 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.65 ms
2026-04-08 01:30:14.844003 | orchestrator |
2026-04-08 01:30:14.844014 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-04-08 01:30:14.844023 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:30:14.844030 | orchestrator | rtt min/avg/max/mdev = 1.647/4.292/8.538/3.032 ms
2026-04-08 01:30:14.844097 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:30:14.844107 | orchestrator | + ping -c3 192.168.112.142
2026-04-08 01:30:14.859530 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data.
2026-04-08 01:30:14.859599 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=9.99 ms
2026-04-08 01:30:15.853436 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=2.63 ms
2026-04-08 01:30:16.855047 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=2.15 ms
2026-04-08 01:30:16.855144 | orchestrator |
2026-04-08 01:30:16.855156 | orchestrator | --- 192.168.112.142 ping statistics ---
2026-04-08 01:30:16.855165 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:30:16.855172 | orchestrator | rtt min/avg/max/mdev = 2.152/4.925/9.992/3.588 ms
2026-04-08 01:30:16.855308 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:30:16.855320 | orchestrator | + ping -c3 192.168.112.103
2026-04-08 01:30:16.867362 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2026-04-08 01:30:16.867462 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=6.94 ms
2026-04-08 01:30:17.863329 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.34 ms
2026-04-08 01:30:18.865984 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.76 ms
2026-04-08 01:30:18.866112 | orchestrator |
2026-04-08 01:30:18.866122 | orchestrator | --- 192.168.112.103 ping statistics ---
2026-04-08 01:30:18.866128 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-08 01:30:18.866133 | orchestrator | rtt min/avg/max/mdev = 1.758/3.677/6.936/2.316 ms
2026-04-08 01:30:18.866139 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:30:18.866144 | orchestrator | + ping -c3 192.168.112.182
2026-04-08 01:30:18.877412 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-04-08 01:30:18.877505 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.34 ms
2026-04-08 01:30:19.873417 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.34 ms
2026-04-08 01:30:20.875393 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.82 ms
2026-04-08 01:30:20.875481 | orchestrator |
2026-04-08 01:30:20.875493 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-04-08 01:30:20.875501 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-08 01:30:20.875509 | orchestrator | rtt min/avg/max/mdev = 1.816/3.832/7.339/2.488 ms
2026-04-08 01:30:20.875517 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-08 01:30:20.875525 | orchestrator | + ping -c3 192.168.112.191
2026-04-08 01:30:20.886218 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-04-08 01:30:20.886313 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=5.82 ms
2026-04-08 01:30:21.885026 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.15 ms
2026-04-08 01:30:22.885035 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.95 ms
2026-04-08 01:30:22.885979 | orchestrator |
2026-04-08 01:30:22.886079 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-04-08 01:30:22.886091 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-08 01:30:22.886099 | orchestrator | rtt min/avg/max/mdev = 1.953/3.307/5.821/1.779 ms
2026-04-08 01:30:22.886118 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-04-08 01:30:24.553330 | orchestrator | 2026-04-08 01:30:24 | ERROR  | Unable to get ansible vault password
2026-04-08 01:30:24.553405 | orchestrator | 2026-04-08 01:30:24 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-08 01:30:24.553413 | orchestrator | 2026-04-08 01:30:24 | ERROR  | Dropping encrypted entries
2026-04-08 01:30:26.192638 | orchestrator | 2026-04-08 01:30:26 | INFO  | Live migrating server 38eab6d1-3924-435d-97d6-9471ea65c757
2026-04-08 01:30:37.480669 | orchestrator | 2026-04-08 01:30:37 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:39.810085 | orchestrator | 2026-04-08 01:30:39 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:42.207107 | orchestrator | 2026-04-08 01:30:42 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:44.516792 | orchestrator | 2026-04-08 01:30:44 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:46.868498 | orchestrator | 2026-04-08 01:30:46 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:49.196181 | orchestrator | 2026-04-08 01:30:49 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:51.552150 | orchestrator | 2026-04-08 01:30:51 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:53.931714 | orchestrator | 2026-04-08 01:30:53 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:56.229967 | orchestrator | 2026-04-08 01:30:56 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:30:58.430655 | orchestrator | 2026-04-08 01:30:58 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:31:00.710694 | orchestrator | 2026-04-08 01:31:00 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) is still in progress
2026-04-08 01:31:03.070107 | orchestrator | 2026-04-08 01:31:03 | INFO  | Live migration of 38eab6d1-3924-435d-97d6-9471ea65c757 (test-4) completed with status ACTIVE
2026-04-08 01:31:03.070200 | orchestrator | 2026-04-08 01:31:03 | INFO  | Live migrating server 10eba327-f3ab-4804-929c-4481eb28ac05
2026-04-08 01:31:12.863403 | orchestrator | 2026-04-08 01:31:12 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:15.212736 | orchestrator | 2026-04-08 01:31:15 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:17.551554 | orchestrator | 2026-04-08 01:31:17 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:19.961176 | orchestrator | 2026-04-08 01:31:19 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:22.189266 | orchestrator | 2026-04-08 01:31:22 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:24.530640 | orchestrator | 2026-04-08 01:31:24 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:26.831315 | orchestrator | 2026-04-08 01:31:26 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:29.135352 | orchestrator | 2026-04-08 01:31:29 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:31.360053 | orchestrator | 2026-04-08 01:31:31 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) is still in progress
2026-04-08 01:31:33.665147 | orchestrator | 2026-04-08 01:31:33 | INFO  | Live migration of 10eba327-f3ab-4804-929c-4481eb28ac05 (test-3) completed with status ACTIVE
2026-04-08 01:31:33.665234 |
orchestrator | 2026-04-08 01:31:33 | INFO  | Live migrating server b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 2026-04-08 01:31:43.447134 | orchestrator | 2026-04-08 01:31:43 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:31:45.764822 | orchestrator | 2026-04-08 01:31:45 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:31:48.068528 | orchestrator | 2026-04-08 01:31:48 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:31:50.478002 | orchestrator | 2026-04-08 01:31:50 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:31:52.881311 | orchestrator | 2026-04-08 01:31:52 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:31:55.106904 | orchestrator | 2026-04-08 01:31:55 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:31:57.461439 | orchestrator | 2026-04-08 01:31:57 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:31:59.727776 | orchestrator | 2026-04-08 01:31:59 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) is still in progress 2026-04-08 01:32:02.033496 | orchestrator | 2026-04-08 01:32:02 | INFO  | Live migration of b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 (test-1) completed with status ACTIVE 2026-04-08 01:32:02.033580 | orchestrator | 2026-04-08 01:32:02 | INFO  | Live migrating server 0fcefdab-da1f-4354-9676-0e7a1a012b1e 2026-04-08 01:32:11.718340 | orchestrator | 2026-04-08 01:32:11 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:14.032919 | orchestrator | 2026-04-08 01:32:14 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 
2026-04-08 01:32:16.411923 | orchestrator | 2026-04-08 01:32:16 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:18.804948 | orchestrator | 2026-04-08 01:32:18 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:21.261952 | orchestrator | 2026-04-08 01:32:21 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:23.530431 | orchestrator | 2026-04-08 01:32:23 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:25.812010 | orchestrator | 2026-04-08 01:32:25 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:28.201831 | orchestrator | 2026-04-08 01:32:28 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:30.531185 | orchestrator | 2026-04-08 01:32:30 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:32.790101 | orchestrator | 2026-04-08 01:32:32 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) is still in progress 2026-04-08 01:32:35.227923 | orchestrator | 2026-04-08 01:32:35 | INFO  | Live migration of 0fcefdab-da1f-4354-9676-0e7a1a012b1e (test) completed with status ACTIVE 2026-04-08 01:32:35.228804 | orchestrator | 2026-04-08 01:32:35 | INFO  | Live migrating server aa5b037b-51d0-4bb7-a12d-285404dd660c 2026-04-08 01:32:46.465067 | orchestrator | 2026-04-08 01:32:46 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:32:48.792970 | orchestrator | 2026-04-08 01:32:48 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:32:51.279673 | orchestrator | 2026-04-08 01:32:51 | INFO  | Live migration of 
aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:32:53.868395 | orchestrator | 2026-04-08 01:32:53 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:32:56.103355 | orchestrator | 2026-04-08 01:32:56 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:32:58.305872 | orchestrator | 2026-04-08 01:32:58 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:33:00.709187 | orchestrator | 2026-04-08 01:33:00 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:33:03.111185 | orchestrator | 2026-04-08 01:33:03 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) is still in progress 2026-04-08 01:33:05.385423 | orchestrator | 2026-04-08 01:33:05 | INFO  | Live migration of aa5b037b-51d0-4bb7-a12d-285404dd660c (test-2) completed with status ACTIVE 2026-04-08 01:33:05.702883 | orchestrator | + compute_list 2026-04-08 01:33:05.702954 | orchestrator | + osism manage compute list testbed-node-3 2026-04-08 01:33:07.252346 | orchestrator | 2026-04-08 01:33:07 | ERROR  | Unable to get ansible vault password 2026-04-08 01:33:07.252437 | orchestrator | 2026-04-08 01:33:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:33:07.252448 | orchestrator | 2026-04-08 01:33:07 | ERROR  | Dropping encrypted entries 2026-04-08 01:33:08.274821 | orchestrator | +------+--------+----------+ 2026-04-08 01:33:08.274917 | orchestrator | | ID | Name | Status | 2026-04-08 01:33:08.274928 | orchestrator | |------+--------+----------| 2026-04-08 01:33:08.274934 | orchestrator | +------+--------+----------+ 2026-04-08 01:33:08.565744 | orchestrator | + osism manage compute list testbed-node-4 2026-04-08 01:33:10.123716 | orchestrator | 2026-04-08 
01:33:10 | ERROR  | Unable to get ansible vault password 2026-04-08 01:33:10.123779 | orchestrator | 2026-04-08 01:33:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:33:10.123790 | orchestrator | 2026-04-08 01:33:10 | ERROR  | Dropping encrypted entries 2026-04-08 01:33:11.233860 | orchestrator | +------+--------+----------+ 2026-04-08 01:33:11.233959 | orchestrator | | ID | Name | Status | 2026-04-08 01:33:11.233969 | orchestrator | |------+--------+----------| 2026-04-08 01:33:11.233977 | orchestrator | +------+--------+----------+ 2026-04-08 01:33:11.563934 | orchestrator | + osism manage compute list testbed-node-5 2026-04-08 01:33:13.262793 | orchestrator | 2026-04-08 01:33:13 | ERROR  | Unable to get ansible vault password 2026-04-08 01:33:13.262872 | orchestrator | 2026-04-08 01:33:13 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-08 01:33:13.262905 | orchestrator | 2026-04-08 01:33:13 | ERROR  | Dropping encrypted entries 2026-04-08 01:33:14.849017 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-08 01:33:14.849112 | orchestrator | | ID | Name | Status | 2026-04-08 01:33:14.849122 | orchestrator | |--------------------------------------+--------+----------| 2026-04-08 01:33:14.849129 | orchestrator | | 38eab6d1-3924-435d-97d6-9471ea65c757 | test-4 | ACTIVE | 2026-04-08 01:33:14.849139 | orchestrator | | 10eba327-f3ab-4804-929c-4481eb28ac05 | test-3 | ACTIVE | 2026-04-08 01:33:14.849146 | orchestrator | | b75e2c17-c4f3-41f1-a8f3-19e8933b8c69 | test-1 | ACTIVE | 2026-04-08 01:33:14.849152 | orchestrator | | 0fcefdab-da1f-4354-9676-0e7a1a012b1e | test | ACTIVE | 2026-04-08 01:33:14.849159 | orchestrator | | aa5b037b-51d0-4bb7-a12d-285404dd660c | test-2 | ACTIVE | 2026-04-08 01:33:14.849165 | orchestrator | +--------------------------------------+--------+----------+ 
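The xtrace output around this point (`+ server_ping`, `++ openstack … floating ip list …`, `+ ping -c3 …`) lets the helper be reconstructed. A minimal sketch, assuming the function body matches the traced commands and that `test` is the `--os-cloud` name used throughout this job:

```shell
# Sketch of the server_ping helper reconstructed from the xtrace lines.
# Pings every ACTIVE floating IP three times to verify connectivity
# before and after the live migrations.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
                         --status ACTIVE -f value -c "Floating IP Address" \
                     | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

The `tr -d '\r'` strips carriage returns from the CLI output so the addresses are usable as plain arguments to `ping`.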
2026-04-08 01:33:15.168656 | orchestrator | + server_ping 2026-04-08 01:33:15.169989 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-08 01:33:15.170273 | orchestrator | ++ tr -d '\r' 2026-04-08 01:33:18.091109 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:33:18.091198 | orchestrator | + ping -c3 192.168.112.168 2026-04-08 01:33:18.105328 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 2026-04-08 01:33:18.105414 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=9.15 ms 2026-04-08 01:33:19.099511 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.03 ms 2026-04-08 01:33:20.100957 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=2.01 ms 2026-04-08 01:33:20.101033 | orchestrator | 2026-04-08 01:33:20.101040 | orchestrator | --- 192.168.112.168 ping statistics --- 2026-04-08 01:33:20.101046 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:33:20.101051 | orchestrator | rtt min/avg/max/mdev = 2.011/4.395/9.151/3.362 ms 2026-04-08 01:33:20.101471 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:33:20.101483 | orchestrator | + ping -c3 192.168.112.142 2026-04-08 01:33:20.112215 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data. 
2026-04-08 01:33:20.112303 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=5.57 ms 2026-04-08 01:33:21.110613 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=2.08 ms 2026-04-08 01:33:22.113028 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=2.87 ms 2026-04-08 01:33:22.113109 | orchestrator | 2026-04-08 01:33:22.113116 | orchestrator | --- 192.168.112.142 ping statistics --- 2026-04-08 01:33:22.113136 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:33:22.113141 | orchestrator | rtt min/avg/max/mdev = 2.081/3.506/5.565/1.491 ms 2026-04-08 01:33:22.113574 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:33:22.113591 | orchestrator | + ping -c3 192.168.112.103 2026-04-08 01:33:22.125432 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2026-04-08 01:33:22.125516 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.17 ms 2026-04-08 01:33:23.122273 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.50 ms 2026-04-08 01:33:24.124100 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.79 ms 2026-04-08 01:33:24.124221 | orchestrator | 2026-04-08 01:33:24.124233 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-08 01:33:24.124241 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:33:24.124248 | orchestrator | rtt min/avg/max/mdev = 1.792/3.817/7.165/2.384 ms 2026-04-08 01:33:24.124289 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:33:24.124298 | orchestrator | + ping -c3 192.168.112.182 2026-04-08 01:33:24.135733 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 
2026-04-08 01:33:24.135826 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.16 ms 2026-04-08 01:33:25.132822 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.71 ms 2026-04-08 01:33:26.134653 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.04 ms 2026-04-08 01:33:26.134724 | orchestrator | 2026-04-08 01:33:26.134732 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-04-08 01:33:26.134738 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-08 01:33:26.134742 | orchestrator | rtt min/avg/max/mdev = 2.042/3.971/7.160/2.271 ms 2026-04-08 01:33:26.134748 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-08 01:33:26.134753 | orchestrator | + ping -c3 192.168.112.191 2026-04-08 01:33:26.143919 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data. 2026-04-08 01:33:26.144009 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=4.95 ms 2026-04-08 01:33:27.142328 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.19 ms 2026-04-08 01:33:28.143860 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.50 ms 2026-04-08 01:33:28.143943 | orchestrator | 2026-04-08 01:33:28.143953 | orchestrator | --- 192.168.112.191 ping statistics --- 2026-04-08 01:33:28.143959 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-08 01:33:28.143964 | orchestrator | rtt min/avg/max/mdev = 1.504/2.879/4.948/1.489 ms 2026-04-08 01:33:28.241150 | orchestrator | ok: Runtime: 0:18:39.875855 2026-04-08 01:33:28.282160 | 2026-04-08 01:33:28.282299 | TASK [Run tempest] 2026-04-08 01:33:29.048905 | orchestrator | 2026-04-08 01:33:29.048990 | orchestrator | # Tempest 2026-04-08 01:33:29.048999 | orchestrator | 2026-04-08 01:33:29.049004 | orchestrator | + set -e 2026-04-08 
01:33:29.049010 | orchestrator | + source /opt/manager-vars.sh 2026-04-08 01:33:29.049017 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-08 01:33:29.049023 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-08 01:33:29.049041 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-08 01:33:29.049049 | orchestrator | ++ CEPH_VERSION=reef 2026-04-08 01:33:29.049055 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-08 01:33:29.049060 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-08 01:33:29.049068 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-08 01:33:29.049074 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-08 01:33:29.049078 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-08 01:33:29.049084 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-08 01:33:29.049088 | orchestrator | ++ export ARA=false 2026-04-08 01:33:29.049092 | orchestrator | ++ ARA=false 2026-04-08 01:33:29.049100 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-08 01:33:29.049105 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-08 01:33:29.049108 | orchestrator | ++ export TEMPEST=true 2026-04-08 01:33:29.049115 | orchestrator | ++ TEMPEST=true 2026-04-08 01:33:29.049118 | orchestrator | ++ export IS_ZUUL=true 2026-04-08 01:33:29.049122 | orchestrator | ++ IS_ZUUL=true 2026-04-08 01:33:29.049127 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 01:33:29.049131 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.187 2026-04-08 01:33:29.049134 | orchestrator | ++ export EXTERNAL_API=false 2026-04-08 01:33:29.049138 | orchestrator | ++ EXTERNAL_API=false 2026-04-08 01:33:29.049142 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-08 01:33:29.049146 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-08 01:33:29.049150 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-08 01:33:29.049153 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-08 01:33:29.049157 | orchestrator | ++ export CEPH_STACK=ceph-ansible 
2026-04-08 01:33:29.049161 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-08 01:33:29.049165 | orchestrator | + echo 2026-04-08 01:33:29.049169 | orchestrator | + echo '# Tempest' 2026-04-08 01:33:29.049173 | orchestrator | + echo 2026-04-08 01:33:29.049176 | orchestrator | + [[ ! -e /opt/tempest ]] 2026-04-08 01:33:29.049180 | orchestrator | + osism apply tempest --skip-tags run-tempest 2026-04-08 01:33:40.496558 | orchestrator | 2026-04-08 01:33:40 | INFO  | Prepare task for execution of tempest. 2026-04-08 01:33:40.571814 | orchestrator | 2026-04-08 01:33:40 | INFO  | Task 18ce6a2b-1411-495f-b133-d72fbbf42a42 (tempest) was prepared for execution. 2026-04-08 01:33:40.571919 | orchestrator | 2026-04-08 01:33:40 | INFO  | It takes a moment until task 18ce6a2b-1411-495f-b133-d72fbbf42a42 (tempest) has been started and output is visible here. 2026-04-08 01:34:58.422475 | orchestrator | 2026-04-08 01:34:58.422609 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-04-08 01:34:58.422626 | orchestrator | 2026-04-08 01:34:58.422637 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-04-08 01:34:58.422657 | orchestrator | Wednesday 08 April 2026 01:33:44 +0000 (0:00:00.329) 0:00:00.329 ******* 2026-04-08 01:34:58.422667 | orchestrator | changed: [testbed-manager] 2026-04-08 01:34:58.422678 | orchestrator | 2026-04-08 01:34:58.422688 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-04-08 01:34:58.422697 | orchestrator | Wednesday 08 April 2026 01:33:45 +0000 (0:00:01.119) 0:00:01.449 ******* 2026-04-08 01:34:58.422707 | orchestrator | changed: [testbed-manager] 2026-04-08 01:34:58.422717 | orchestrator | 2026-04-08 01:34:58.422727 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-04-08 01:34:58.422737 | orchestrator | Wednesday 08 April 2026 01:33:46 +0000 
(0:00:01.217) 0:00:02.667 ******* 2026-04-08 01:34:58.422746 | orchestrator | ok: [testbed-manager] 2026-04-08 01:34:58.422757 | orchestrator | 2026-04-08 01:34:58.422767 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-04-08 01:34:58.422776 | orchestrator | Wednesday 08 April 2026 01:33:46 +0000 (0:00:00.419) 0:00:03.086 ******* 2026-04-08 01:34:58.422786 | orchestrator | changed: [testbed-manager] 2026-04-08 01:34:58.422817 | orchestrator | 2026-04-08 01:34:58.422836 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-04-08 01:34:58.422854 | orchestrator | Wednesday 08 April 2026 01:34:08 +0000 (0:00:21.615) 0:00:24.701 ******* 2026-04-08 01:34:58.422890 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-04-08 01:34:58.422900 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-04-08 01:34:58.422914 | orchestrator | 2026-04-08 01:34:58.422923 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-04-08 01:34:58.422933 | orchestrator | Wednesday 08 April 2026 01:34:16 +0000 (0:00:07.840) 0:00:32.542 ******* 2026-04-08 01:34:58.422943 | orchestrator | ok: [testbed-manager] => { 2026-04-08 01:34:58.422953 | orchestrator |  "changed": false, 2026-04-08 01:34:58.422962 | orchestrator |  "msg": "All assertions passed" 2026-04-08 01:34:58.422972 | orchestrator | } 2026-04-08 01:34:58.422982 | orchestrator | 2026-04-08 01:34:58.422991 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-04-08 01:34:58.423001 | orchestrator | Wednesday 08 April 2026 01:34:16 +0000 (0:00:00.149) 0:00:32.692 ******* 2026-04-08 01:34:58.423010 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423020 | orchestrator | 2026-04-08 01:34:58.423030 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] 
************************ 2026-04-08 01:34:58.423039 | orchestrator | Wednesday 08 April 2026 01:34:19 +0000 (0:00:03.599) 0:00:36.291 ******* 2026-04-08 01:34:58.423049 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423058 | orchestrator | 2026-04-08 01:34:58.423068 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-04-08 01:34:58.423077 | orchestrator | Wednesday 08 April 2026 01:34:21 +0000 (0:00:01.941) 0:00:38.233 ******* 2026-04-08 01:34:58.423087 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423096 | orchestrator | 2026-04-08 01:34:58.423106 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-04-08 01:34:58.423115 | orchestrator | Wednesday 08 April 2026 01:34:25 +0000 (0:00:03.794) 0:00:42.027 ******* 2026-04-08 01:34:58.423125 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423134 | orchestrator | 2026-04-08 01:34:58.423144 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-04-08 01:34:58.423153 | orchestrator | Wednesday 08 April 2026 01:34:25 +0000 (0:00:00.193) 0:00:42.220 ******* 2026-04-08 01:34:58.423163 | orchestrator | changed: [testbed-manager] 2026-04-08 01:34:58.423173 | orchestrator | 2026-04-08 01:34:58.423182 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-04-08 01:34:58.423192 | orchestrator | Wednesday 08 April 2026 01:34:28 +0000 (0:00:02.642) 0:00:44.863 ******* 2026-04-08 01:34:58.423202 | orchestrator | changed: [testbed-manager] 2026-04-08 01:34:58.423211 | orchestrator | 2026-04-08 01:34:58.423221 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-04-08 01:34:58.423230 | orchestrator | Wednesday 08 April 2026 01:34:37 +0000 (0:00:09.158) 0:00:54.022 ******* 2026-04-08 01:34:58.423240 | orchestrator | 
changed: [testbed-manager] 2026-04-08 01:34:58.423249 | orchestrator | 2026-04-08 01:34:58.423258 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-04-08 01:34:58.423268 | orchestrator | Wednesday 08 April 2026 01:34:38 +0000 (0:00:00.812) 0:00:54.834 ******* 2026-04-08 01:34:58.423277 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423287 | orchestrator | 2026-04-08 01:34:58.423296 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-04-08 01:34:58.423306 | orchestrator | Wednesday 08 April 2026 01:34:40 +0000 (0:00:01.544) 0:00:56.379 ******* 2026-04-08 01:34:58.423315 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423325 | orchestrator | 2026-04-08 01:34:58.423334 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] *** 2026-04-08 01:34:58.423344 | orchestrator | Wednesday 08 April 2026 01:34:41 +0000 (0:00:01.564) 0:00:57.944 ******* 2026-04-08 01:34:58.423353 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423362 | orchestrator | 2026-04-08 01:34:58.423372 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-04-08 01:34:58.423389 | orchestrator | Wednesday 08 April 2026 01:34:41 +0000 (0:00:00.185) 0:00:58.130 ******* 2026-04-08 01:34:58.423398 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423408 | orchestrator | 2026-04-08 01:34:58.423426 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-04-08 01:34:58.423436 | orchestrator | Wednesday 08 April 2026 01:34:42 +0000 (0:00:00.391) 0:00:58.521 ******* 2026-04-08 01:34:58.423446 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 01:34:58.423455 | orchestrator | 2026-04-08 01:34:58.423465 | orchestrator | TASK [osism.validations.tempest : Assert floating network 
id has been resolved] *** 2026-04-08 01:34:58.423524 | orchestrator | Wednesday 08 April 2026 01:34:46 +0000 (0:00:04.253) 0:01:02.775 ******* 2026-04-08 01:34:58.423536 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-04-08 01:34:58.423546 | orchestrator |  "changed": false, 2026-04-08 01:34:58.423556 | orchestrator |  "msg": "All assertions passed" 2026-04-08 01:34:58.423566 | orchestrator | } 2026-04-08 01:34:58.423575 | orchestrator | 2026-04-08 01:34:58.423586 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-04-08 01:34:58.423596 | orchestrator | Wednesday 08 April 2026 01:34:46 +0000 (0:00:00.184) 0:01:02.960 ******* 2026-04-08 01:34:58.423606 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-04-08 01:34:58.423617 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-04-08 01:34:58.423627 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:34:58.423637 | orchestrator | 2026-04-08 01:34:58.423646 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-04-08 01:34:58.423656 | orchestrator | Wednesday 08 April 2026 01:34:46 +0000 (0:00:00.181) 0:01:03.141 ******* 2026-04-08 01:34:58.423665 | orchestrator | skipping: [testbed-manager] 2026-04-08 01:34:58.423675 | orchestrator | 2026-04-08 01:34:58.423684 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-04-08 01:34:58.423694 | orchestrator | Wednesday 08 April 2026 01:34:47 +0000 (0:00:00.157) 0:01:03.299 ******* 2026-04-08 01:34:58.423704 | orchestrator | ok: [testbed-manager] 2026-04-08 01:34:58.423713 | orchestrator | 2026-04-08 01:34:58.423723 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-04-08 01:34:58.423733 | orchestrator | Wednesday 08 April 2026 
01:34:47 +0000 (0:00:00.523) 0:01:03.822 *******
2026-04-08 01:34:58.423742 | orchestrator | changed: [testbed-manager]
2026-04-08 01:34:58.423752 | orchestrator |
2026-04-08 01:34:58.423762 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-04-08 01:34:58.423771 | orchestrator | Wednesday 08 April 2026 01:34:48 +0000 (0:00:00.943) 0:01:04.766 *******
2026-04-08 01:34:58.423781 | orchestrator | ok: [testbed-manager]
2026-04-08 01:34:58.423790 | orchestrator |
2026-04-08 01:34:58.423800 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-04-08 01:34:58.423810 | orchestrator | Wednesday 08 April 2026 01:34:48 +0000 (0:00:00.449) 0:01:05.216 *******
2026-04-08 01:34:58.423819 | orchestrator | skipping: [testbed-manager]
2026-04-08 01:34:58.423829 | orchestrator |
2026-04-08 01:34:58.423838 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-04-08 01:34:58.423848 | orchestrator | Wednesday 08 April 2026 01:34:49 +0000 (0:00:00.312) 0:01:05.529 *******
2026-04-08 01:34:58.423858 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-08 01:34:58.423868 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-08 01:34:58.423877 | orchestrator |
2026-04-08 01:34:58.423887 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-04-08 01:34:58.423897 | orchestrator | Wednesday 08 April 2026 01:34:57 +0000 (0:00:08.114) 0:01:13.644 *******
2026-04-08 01:34:58.423907 | orchestrator | changed: [testbed-manager]
2026-04-08 01:34:58.423923 | orchestrator |
2026-04-08 01:34:58.423933 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 01:34:58.423943 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-08 01:34:58.423953 | orchestrator |
2026-04-08 01:34:58.423963 | orchestrator |
2026-04-08 01:34:58.423972 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 01:34:58.423982 | orchestrator | Wednesday 08 April 2026 01:34:58 +0000 (0:00:01.058) 0:01:14.702 *******
2026-04-08 01:34:58.423992 | orchestrator | ===============================================================================
2026-04-08 01:34:58.424001 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 21.62s
2026-04-08 01:34:58.424011 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 9.16s
2026-04-08 01:34:58.424020 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.11s
2026-04-08 01:34:58.424030 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 7.84s
2026-04-08 01:34:58.424046 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.25s
2026-04-08 01:34:58.424056 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.79s
2026-04-08 01:34:58.424066 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.60s
2026-04-08 01:34:58.424075 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.64s
2026-04-08 01:34:58.424085 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.94s
2026-04-08 01:34:58.424095 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.56s
2026-04-08 01:34:58.424105 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.54s
2026-04-08 01:34:58.424115 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.22s
2026-04-08 01:34:58.424127 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.12s
2026-04-08 01:34:58.424144 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.06s
2026-04-08 01:34:58.424164 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.94s
2026-04-08 01:34:58.424188 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.81s
2026-04-08 01:34:58.424204 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.52s
2026-04-08 01:34:58.424230 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.45s
2026-04-08 01:34:58.689319 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.42s
2026-04-08 01:34:58.689399 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.39s
2026-04-08 01:34:58.916660 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-08 01:34:58.922084 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-08 01:34:58.927998 | orchestrator |
2026-04-08 01:34:58.928051 | orchestrator | ## IDENTITY (API)
2026-04-08 01:34:58.928057 | orchestrator |
2026-04-08 01:34:58.928061 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-08 01:34:58.928065 | orchestrator | + echo
2026-04-08 01:34:58.928069 | orchestrator | + echo '## IDENTITY (API)'
2026-04-08 01:34:58.928073 | orchestrator | + echo
2026-04-08 01:34:58.928082 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-08 01:34:58.928087 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-08 01:34:58.929626 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-04-08 01:34:58.930761 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-08 01:34:58.934267 | orchestrator | + tee -a /opt/tempest/20260408-0134.log
2026-04-08 01:35:02.821761 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-08 01:35:02.821850 | orchestrator | Did you mean one of these?
2026-04-08 01:35:02.821859 | orchestrator | help
2026-04-08 01:35:02.821864 | orchestrator | init
2026-04-08 01:35:03.233182 | orchestrator |
2026-04-08 01:35:03.233245 | orchestrator | ## IMAGE (API)
2026-04-08 01:35:03.233255 | orchestrator |
2026-04-08 01:35:03.233262 | orchestrator | + echo
2026-04-08 01:35:03.233269 | orchestrator | + echo '## IMAGE (API)'
2026-04-08 01:35:03.233276 | orchestrator | + echo
2026-04-08 01:35:03.233283 | orchestrator | + _tempest tempest.api.image.v2
2026-04-08 01:35:03.233290 | orchestrator | + local regex=tempest.api.image.v2
2026-04-08 01:35:03.233775 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-04-08 01:35:03.234615 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-08 01:35:03.237608 | orchestrator | + tee -a /opt/tempest/20260408-0135.log
2026-04-08 01:35:06.773114 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-08 01:35:06.773218 | orchestrator | Did you mean one of these?
2026-04-08 01:35:06.773230 | orchestrator | help
2026-04-08 01:35:06.773238 | orchestrator | init
2026-04-08 01:35:07.132738 | orchestrator |
2026-04-08 01:35:07.132842 | orchestrator | ## NETWORK (API)
2026-04-08 01:35:07.132852 | orchestrator |
2026-04-08 01:35:07.132860 | orchestrator | + echo
2026-04-08 01:35:07.132887 | orchestrator | + echo '## NETWORK (API)'
2026-04-08 01:35:07.132895 | orchestrator | + echo
2026-04-08 01:35:07.132903 | orchestrator | + _tempest tempest.api.network
2026-04-08 01:35:07.132910 | orchestrator | + local regex=tempest.api.network
2026-04-08 01:35:07.133794 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-08 01:35:07.133865 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-08 01:35:07.145858 | orchestrator | + tee -a /opt/tempest/20260408-0135.log
2026-04-08 01:35:10.748396 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-08 01:35:10.748524 | orchestrator | Did you mean one of these?
2026-04-08 01:35:10.748538 | orchestrator | help
2026-04-08 01:35:10.748545 | orchestrator | init
2026-04-08 01:35:11.120789 | orchestrator |
2026-04-08 01:35:11.120847 | orchestrator | ## VOLUME (API)
2026-04-08 01:35:11.120856 | orchestrator |
2026-04-08 01:35:11.120862 | orchestrator | + echo
2026-04-08 01:35:11.120868 | orchestrator | + echo '## VOLUME (API)'
2026-04-08 01:35:11.120875 | orchestrator | + echo
2026-04-08 01:35:11.120882 | orchestrator | + _tempest tempest.api.volume
2026-04-08 01:35:11.120889 | orchestrator | + local regex=tempest.api.volume
2026-04-08 01:35:11.122266 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-08 01:35:11.123150 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-08 01:35:11.125670 | orchestrator | + tee -a /opt/tempest/20260408-0135.log
2026-04-08 01:35:14.905279 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-08 01:35:14.905407 | orchestrator | Did you mean one of these?
2026-04-08 01:35:14.905418 | orchestrator | help
2026-04-08 01:35:14.905425 | orchestrator | init
2026-04-08 01:35:15.306613 | orchestrator |
2026-04-08 01:35:15.306738 | orchestrator | ## COMPUTE (API)
2026-04-08 01:35:15.306749 | orchestrator |
2026-04-08 01:35:15.306754 | orchestrator | + echo
2026-04-08 01:35:15.306759 | orchestrator | + echo '## COMPUTE (API)'
2026-04-08 01:35:15.306764 | orchestrator | + echo
2026-04-08 01:35:15.306768 | orchestrator | + _tempest tempest.api.compute
2026-04-08 01:35:15.306803 | orchestrator | + local regex=tempest.api.compute
2026-04-08 01:35:15.307034 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-08 01:35:15.310175 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-08 01:35:15.312973 | orchestrator | + tee -a /opt/tempest/20260408-0135.log
2026-04-08 01:35:18.866179 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-08 01:35:18.866231 | orchestrator | Did you mean one of these?
2026-04-08 01:35:18.866237 | orchestrator | help
2026-04-08 01:35:18.866241 | orchestrator | init
2026-04-08 01:35:19.256090 | orchestrator |
2026-04-08 01:35:19.256150 | orchestrator | ## DNS (API)
2026-04-08 01:35:19.256156 | orchestrator |
2026-04-08 01:35:19.256160 | orchestrator | + echo
2026-04-08 01:35:19.256164 | orchestrator | + echo '## DNS (API)'
2026-04-08 01:35:19.256168 | orchestrator | + echo
2026-04-08 01:35:19.256173 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-08 01:35:19.256177 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-08 01:35:19.257348 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-08 01:35:19.258908 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-08 01:35:19.263211 | orchestrator | + tee -a /opt/tempest/20260408-0135.log
2026-04-08 01:35:22.996220 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-08 01:35:22.996273 | orchestrator | Did you mean one of these?
2026-04-08 01:35:22.996280 | orchestrator | help
2026-04-08 01:35:22.996285 | orchestrator | init
2026-04-08 01:35:23.388419 | orchestrator |
2026-04-08 01:35:23.388505 | orchestrator | ## OBJECT-STORE (API)
2026-04-08 01:35:23.388516 | orchestrator |
2026-04-08 01:35:23.388523 | orchestrator | + echo
2026-04-08 01:35:23.388529 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-08 01:35:23.388536 | orchestrator | + echo
2026-04-08 01:35:23.388542 | orchestrator | + _tempest tempest.api.object_storage
2026-04-08 01:35:23.388549 | orchestrator | + local regex=tempest.api.object_storage
2026-04-08 01:35:23.389619 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-08 01:35:23.389751 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-08 01:35:23.392796 | orchestrator | + tee -a /opt/tempest/20260408-0135.log
2026-04-08 01:35:27.002898 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-08 01:35:27.003006 | orchestrator | Did you mean one of these?
2026-04-08 01:35:27.003020 | orchestrator | help
2026-04-08 01:35:27.003028 | orchestrator | init
2026-04-08 01:35:27.888508 | orchestrator | ok: Runtime: 0:01:58.744505
2026-04-08 01:35:27.910692 |
2026-04-08 01:35:27.910886 | TASK [Check prometheus alert status]
2026-04-08 01:35:28.448104 | orchestrator | skipping: Conditional result was False
2026-04-08 01:35:28.451274 |
2026-04-08 01:35:28.451488 | PLAY RECAP
2026-04-08 01:35:28.451644 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-08 01:35:28.451742 |
2026-04-08 01:35:28.694565 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-08 01:35:28.695811 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-08 01:35:29.467757 |
2026-04-08 01:35:29.467926 | PLAY [Post output play]
2026-04-08 01:35:29.484177 |
2026-04-08 01:35:29.484320 | LOOP [stage-output : Register sources]
2026-04-08 01:35:29.561592 |
2026-04-08 01:35:29.561910 | TASK [stage-output : Check sudo]
2026-04-08 01:35:30.486359 | orchestrator | sudo: a password is required
2026-04-08 01:35:30.598618 | orchestrator | ok: Runtime: 0:00:00.016068
2026-04-08 01:35:30.614276 |
2026-04-08 01:35:30.614444 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-08 01:35:30.646794 |
2026-04-08 01:35:30.647113 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-08 01:35:30.715701 | orchestrator | ok
2026-04-08 01:35:30.724410 |
2026-04-08 01:35:30.724543 | LOOP [stage-output : Ensure target folders exist]
2026-04-08 01:35:31.266989 | orchestrator | ok: "docs"
2026-04-08 01:35:31.267320 |
2026-04-08 01:35:31.587355 | orchestrator | ok: "artifacts"
2026-04-08 01:35:31.900499 | orchestrator | ok: "logs"
2026-04-08 01:35:31.924996 |
2026-04-08 01:35:31.925187 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-08 01:35:31.966823 |
2026-04-08 01:35:31.967173 | TASK
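Every `_tempest` invocation above fails the same way: the entire `run --workspace-path … --concurrency 16` string is echoed back as one unrecognized command, with only `help` and `init` suggested. The log alone does not prove the cause; the short suggestion list may mean the image's `tempest run` entry point failed to load, but the error shape is also exactly what a wrapper produces when it hands the subcommand and its flags to a CLI as a single quoted word. A minimal demonstration of that quoting difference, using a hypothetical `fake_cli` stand-in rather than tempest itself:

```shell
#!/bin/sh
# Hypothetical stand-in for a CLI entrypoint: reports how many
# separate arguments it actually received.
fake_cli() {
    echo "argc=$#"
}

args="run --regex tempest.api.identity.v3 --concurrency 16"

# Broken: the whole string is passed as ONE argument, so the CLI
# sees a single (unknown) command named "run --regex ...".
fake_cli "$args"   # argc=1

# Working: unquoted expansion (or "$@" / an array in bash) splits
# the string into separate words before the call.
fake_cli $args     # argc=5
```

If quoting is the culprit, the fix inside a wrapper is `"$@"` or an array rather than a single quoted variable. Separately, the `tee` targets above roll from `20260408-0134.log` to `20260408-0135.log` because `date +%Y%m%d-%H%M` is re-evaluated per invocation; computing the name once at script start would keep one log file per run.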
[stage-output : Make all log files readable]
2026-04-08 01:35:32.353839 | orchestrator | ok
2026-04-08 01:35:32.366773 |
2026-04-08 01:35:32.367079 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-08 01:35:32.404056 | orchestrator | skipping: Conditional result was False
2026-04-08 01:35:32.420245 |
2026-04-08 01:35:32.420385 | TASK [stage-output : Discover log files for compression]
2026-04-08 01:35:32.444495 | orchestrator | skipping: Conditional result was False
2026-04-08 01:35:32.457657 |
2026-04-08 01:35:32.457815 | LOOP [stage-output : Archive everything from logs]
2026-04-08 01:35:32.499153 |
2026-04-08 01:35:32.499322 | PLAY [Post cleanup play]
2026-04-08 01:35:32.507799 |
2026-04-08 01:35:32.507903 | TASK [Set cloud fact (Zuul deployment)]
2026-04-08 01:35:32.562426 | orchestrator | ok
2026-04-08 01:35:32.573321 |
2026-04-08 01:35:32.573440 | TASK [Set cloud fact (local deployment)]
2026-04-08 01:35:32.597119 | orchestrator | skipping: Conditional result was False
2026-04-08 01:35:32.608839 |
2026-04-08 01:35:32.608996 | TASK [Clean the cloud environment]
2026-04-08 01:35:33.338934 | orchestrator | 2026-04-08 01:35:33 - clean up servers
2026-04-08 01:35:34.108745 | orchestrator | 2026-04-08 01:35:34 - testbed-manager
2026-04-08 01:35:34.193862 | orchestrator | 2026-04-08 01:35:34 - testbed-node-0
2026-04-08 01:35:34.279062 | orchestrator | 2026-04-08 01:35:34 - testbed-node-3
2026-04-08 01:35:34.362410 | orchestrator | 2026-04-08 01:35:34 - testbed-node-5
2026-04-08 01:35:34.453783 | orchestrator | 2026-04-08 01:35:34 - testbed-node-2
2026-04-08 01:35:34.550648 | orchestrator | 2026-04-08 01:35:34 - testbed-node-1
2026-04-08 01:35:34.637913 | orchestrator | 2026-04-08 01:35:34 - testbed-node-4
2026-04-08 01:35:34.727972 | orchestrator | 2026-04-08 01:35:34 - clean up keypairs
2026-04-08 01:35:34.746482 | orchestrator | 2026-04-08 01:35:34 - testbed
2026-04-08 01:35:34.775407 | orchestrator | 2026-04-08 01:35:34 - wait for servers to be gone
2026-04-08 01:35:47.819319 | orchestrator | 2026-04-08 01:35:47 - clean up ports
2026-04-08 01:35:48.025295 | orchestrator | 2026-04-08 01:35:48 - 21b0622c-60e1-4f72-bade-4bd879236f53
2026-04-08 01:35:48.333689 | orchestrator | 2026-04-08 01:35:48 - 361b4f0f-7b76-4e10-9692-584fc9a72f49
2026-04-08 01:35:48.627984 | orchestrator | 2026-04-08 01:35:48 - bec27311-8567-4faf-9986-7c89c9d7c5b8
2026-04-08 01:35:48.843072 | orchestrator | 2026-04-08 01:35:48 - c0c2a58a-ee1b-4110-8e1f-6555d89b2e43
2026-04-08 01:35:49.054780 | orchestrator | 2026-04-08 01:35:49 - e8e4eec1-1d36-4f6d-ac35-d2628901f62f
2026-04-08 01:35:49.256825 | orchestrator | 2026-04-08 01:35:49 - f3456665-6351-4329-b146-348d83e0bd20
2026-04-08 01:35:49.462848 | orchestrator | 2026-04-08 01:35:49 - ff14727d-c365-4db2-b7ef-f3026036cf06
2026-04-08 01:35:49.933574 | orchestrator | 2026-04-08 01:35:49 - clean up volumes
2026-04-08 01:35:50.060168 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-2-node-base
2026-04-08 01:35:50.101240 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-0-node-base
2026-04-08 01:35:50.141313 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-3-node-base
2026-04-08 01:35:50.180202 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-4-node-base
2026-04-08 01:35:50.225920 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-5-node-base
2026-04-08 01:35:50.264277 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-1-node-base
2026-04-08 01:35:50.306108 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-1-node-4
2026-04-08 01:35:50.350154 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-0-node-3
2026-04-08 01:35:50.389504 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-manager-base
2026-04-08 01:35:50.428311 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-2-node-5
2026-04-08 01:35:50.471083 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-3-node-3
2026-04-08 01:35:50.513331 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-6-node-3
2026-04-08 01:35:50.552725 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-7-node-4
2026-04-08 01:35:50.589551 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-5-node-5
2026-04-08 01:35:50.627258 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-4-node-4
2026-04-08 01:35:50.664176 | orchestrator | 2026-04-08 01:35:50 - testbed-volume-8-node-5
2026-04-08 01:35:50.702844 | orchestrator | 2026-04-08 01:35:50 - disconnect routers
2026-04-08 01:35:50.812082 | orchestrator | 2026-04-08 01:35:50 - testbed
2026-04-08 01:35:51.861688 | orchestrator | 2026-04-08 01:35:51 - clean up subnets
2026-04-08 01:35:51.913819 | orchestrator | 2026-04-08 01:35:51 - subnet-testbed-management
2026-04-08 01:35:52.081856 | orchestrator | 2026-04-08 01:35:52 - clean up networks
2026-04-08 01:35:52.260651 | orchestrator | 2026-04-08 01:35:52 - net-testbed-management
2026-04-08 01:35:52.547892 | orchestrator | 2026-04-08 01:35:52 - clean up security groups
2026-04-08 01:35:52.596315 | orchestrator | 2026-04-08 01:35:52 - testbed-node
2026-04-08 01:35:52.700620 | orchestrator | 2026-04-08 01:35:52 - testbed-management
2026-04-08 01:35:52.821147 | orchestrator | 2026-04-08 01:35:52 - clean up floating ips
2026-04-08 01:35:52.871962 | orchestrator | 2026-04-08 01:35:52 - 81.163.193.187
2026-04-08 01:35:53.312782 | orchestrator | 2026-04-08 01:35:53 - clean up routers
2026-04-08 01:35:53.436457 | orchestrator | 2026-04-08 01:35:53 - testbed
2026-04-08 01:35:55.168485 | orchestrator | ok: Runtime: 0:00:21.801989
2026-04-08 01:35:55.173272 |
2026-04-08 01:35:55.173443 | PLAY RECAP
2026-04-08 01:35:55.173556 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-08 01:35:55.173608 |
2026-04-08 01:35:55.321815 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-08 01:35:55.322980 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
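The `Clean the cloud environment` task above tears resources down in strict dependency order: servers first, then keypairs, ports, and volumes; router interfaces are disconnected before subnets and networks go; security groups, floating IPs, and finally the routers themselves come last. That ordering avoids "resource in use" conflicts. It can be sketched as explicit `python-openstackclient` calls; the resource names are taken from the log, and `run` only echoes each command so the sketch is an illustration rather than a live teardown:

```shell
#!/bin/sh
# Teardown order mirroring the cleanup phases in the log.
# Drop the echo in run() to execute for real (assumes the standard
# python-openstackclient CLI and credentials in the environment).
run() { echo "$@"; }

run openstack server delete testbed-manager                           # servers first
run openstack keypair delete testbed
run openstack port delete 21b0622c-60e1-4f72-bade-4bd879236f53
run openstack volume delete testbed-volume-manager-base
run openstack router remove subnet testbed subnet-testbed-management  # disconnect before deleting
run openstack subnet delete subnet-testbed-management
run openstack network delete net-testbed-management
run openstack security group delete testbed-node
run openstack floating ip delete 81.163.193.187
run openstack router delete testbed                                   # routers last
```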
2026-04-08 01:35:56.074059 |
2026-04-08 01:35:56.074223 | PLAY [Cleanup play]
2026-04-08 01:35:56.090336 |
2026-04-08 01:35:56.090477 | TASK [Set cloud fact (Zuul deployment)]
2026-04-08 01:35:56.148897 | orchestrator | ok
2026-04-08 01:35:56.158347 |
2026-04-08 01:35:56.158560 | TASK [Set cloud fact (local deployment)]
2026-04-08 01:35:56.193592 | orchestrator | skipping: Conditional result was False
2026-04-08 01:35:56.209350 |
2026-04-08 01:35:56.209502 | TASK [Clean the cloud environment]
2026-04-08 01:35:57.404897 | orchestrator | 2026-04-08 01:35:57 - clean up servers
2026-04-08 01:35:57.918764 | orchestrator | 2026-04-08 01:35:57 - clean up keypairs
2026-04-08 01:35:57.938085 | orchestrator | 2026-04-08 01:35:57 - wait for servers to be gone
2026-04-08 01:35:57.983053 | orchestrator | 2026-04-08 01:35:57 - clean up ports
2026-04-08 01:35:58.067829 | orchestrator | 2026-04-08 01:35:58 - clean up volumes
2026-04-08 01:35:58.145538 | orchestrator | 2026-04-08 01:35:58 - disconnect routers
2026-04-08 01:35:58.172986 | orchestrator | 2026-04-08 01:35:58 - clean up subnets
2026-04-08 01:35:58.190943 | orchestrator | 2026-04-08 01:35:58 - clean up networks
2026-04-08 01:35:58.348182 | orchestrator | 2026-04-08 01:35:58 - clean up security groups
2026-04-08 01:35:58.386273 | orchestrator | 2026-04-08 01:35:58 - clean up floating ips
2026-04-08 01:35:58.409622 | orchestrator | 2026-04-08 01:35:58 - clean up routers
2026-04-08 01:35:58.746526 | orchestrator | ok: Runtime: 0:00:01.453243
2026-04-08 01:35:58.750423 |
2026-04-08 01:35:58.750581 | PLAY RECAP
2026-04-08 01:35:58.750932 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-08 01:35:58.751032 |
2026-04-08 01:35:58.875838 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-08 01:35:58.876932 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-08 01:35:59.690918 |
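This second cleanup pass finds nothing left from the first and completes in about 1.5 seconds, which only works because every phase tolerates already-deleted resources. A hedged sketch of that idempotent-delete pattern; `delete_if_present` is a hypothetical helper, not taken from the testbed scripts:

```shell
#!/bin/sh
# Idempotent delete: check the listing command's output for the
# resource before attempting a delete, so a re-run is a cheap no-op.
# $1 is the resource name; the remaining args are the list command.
delete_if_present() {
    name=$1; shift
    if "$@" 2>/dev/null | grep -q "$name"; then
        echo "deleting $name"      # a real impl would delete here
    else
        echo "$name already gone"
    fi
}
```

The alternative, deleting unconditionally and ignoring "not found" errors, also works but hides genuine API failures; checking first keeps the second pass honest about what it skipped.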
2026-04-08 01:35:59.691080 | PLAY [Base post-fetch]
2026-04-08 01:35:59.706891 |
2026-04-08 01:35:59.707054 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-08 01:35:59.772925 | orchestrator | skipping: Conditional result was False
2026-04-08 01:35:59.788667 |
2026-04-08 01:35:59.788949 | TASK [fetch-output : Set log path for single node]
2026-04-08 01:35:59.838445 | orchestrator | ok
2026-04-08 01:35:59.848826 |
2026-04-08 01:35:59.848987 | LOOP [fetch-output : Ensure local output dirs]
2026-04-08 01:36:00.370281 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/work/logs"
2026-04-08 01:36:00.664823 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/work/artifacts"
2026-04-08 01:36:00.936559 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9fcaa67de16142939af440d960a751f3/work/docs"
2026-04-08 01:36:00.954328 |
2026-04-08 01:36:00.954472 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-08 01:36:01.869067 | orchestrator | changed: .d..t...... ./
2026-04-08 01:36:01.869412 | orchestrator | changed: All items complete
2026-04-08 01:36:01.869464 |
2026-04-08 01:36:02.591010 | orchestrator | changed: .d..t...... ./
2026-04-08 01:36:03.344580 | orchestrator | changed: .d..t...... ./
2026-04-08 01:36:03.375350 |
2026-04-08 01:36:03.375516 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-08 01:36:03.417860 | orchestrator | skipping: Conditional result was False
2026-04-08 01:36:03.423876 | orchestrator | skipping: Conditional result was False
2026-04-08 01:36:03.446504 |
2026-04-08 01:36:03.446647 | PLAY RECAP
2026-04-08 01:36:03.446751 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-08 01:36:03.446793 |
2026-04-08 01:36:03.578487 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-08 01:36:03.579600 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-08 01:36:04.349369 |
2026-04-08 01:36:04.349532 | PLAY [Base post]
2026-04-08 01:36:04.364099 |
2026-04-08 01:36:04.364234 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-08 01:36:05.421260 | orchestrator | changed
2026-04-08 01:36:05.429038 |
2026-04-08 01:36:05.429159 | PLAY RECAP
2026-04-08 01:36:05.429221 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-08 01:36:05.429283 |
2026-04-08 01:36:05.550995 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-08 01:36:05.553552 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-08 01:36:06.383709 |
2026-04-08 01:36:06.383924 | PLAY [Base post-logs]
2026-04-08 01:36:06.395154 |
2026-04-08 01:36:06.395299 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-08 01:36:06.876948 | localhost | changed
2026-04-08 01:36:06.894632 |
2026-04-08 01:36:06.894875 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-08 01:36:06.935195 | localhost | ok
2026-04-08 01:36:06.944982 |
2026-04-08 01:36:06.945332 | TASK [Set zuul-log-path fact]
2026-04-08 01:36:06.975706 | localhost | ok
2026-04-08 01:36:06.990028 |
2026-04-08 01:36:06.990203 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-08 01:36:07.028211 | localhost | ok
2026-04-08 01:36:07.034933 |
2026-04-08 01:36:07.035114 | TASK [upload-logs : Create log directories]
2026-04-08 01:36:07.536371 | localhost | changed
2026-04-08 01:36:07.539282 |
2026-04-08 01:36:07.539398 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-08 01:36:08.030420 | localhost -> localhost | ok: Runtime: 0:00:00.007516
2026-04-08 01:36:08.038985 |
2026-04-08 01:36:08.039151 | TASK [upload-logs : Upload logs to log server]
2026-04-08 01:36:08.596115 | localhost | Output suppressed because no_log was given
2026-04-08 01:36:08.599190 |
2026-04-08 01:36:08.599346 | LOOP [upload-logs : Compress console log and json output]
2026-04-08 01:36:08.656329 | localhost | skipping: Conditional result was False
2026-04-08 01:36:08.664298 | localhost | skipping: Conditional result was False
2026-04-08 01:36:08.672136 |
2026-04-08 01:36:08.672375 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-08 01:36:08.730598 | localhost | skipping: Conditional result was False
2026-04-08 01:36:08.731065 |
2026-04-08 01:36:08.735407 | localhost | skipping: Conditional result was False
2026-04-08 01:36:08.744905 |
2026-04-08 01:36:08.745158 | LOOP [upload-logs : Upload console log and json output]